Creation of virtual networks spanning multiple public clouds
Patent abstract:
Some embodiments establish, for an entity, a virtual network over several public clouds from several public cloud providers and/or in several regions. In some embodiments, the virtual network is an overlay network that spans multiple public clouds to interconnect one or more private networks (for example, networks within branches, divisions and departments of the entity, or their associated data centers), mobile users, and machines providing SaaS (Software as a Service) and other network applications of the entity. The virtual network, in some embodiments, can be configured to optimize the routing of the entity's data messages to their destinations for better end-to-end performance, reliability and security, while trying to minimize the routing of this traffic through the Internet. In addition, the virtual network in some embodiments can be configured to optimize the layer 4 processing of the data message flows that pass through the network.
Publication number: BR112020006724A2
Application number: R112020006724-5
Filing date: 2018-10-01
Publication date: 2020-10-06
Inventors: Israel Cidon; Chen Dar; Prashanth Venugopal; Eyal Zohar; Alex Markuze; Aran Bergman
Applicant: Vmware, Inc.
Primary IPC class:
Patent description:
[0001] Today, a corporate network is the main communication infrastructure that securely connects the different offices and divisions of a company. This network is typically a wide area network (WAN) that connects (1) users at regional branches and campuses, (2) corporate data centers that host business applications, Intranets and their corresponding data, and (3) the global Internet through corporate firewalls and DMZs (demilitarized zones). Corporate networks include specialized hardware, such as switches, routers and middlebox devices interconnected by expensive leased lines, such as Frame Relay and MPLS (multiprotocol label switching). [0002] In recent years, there has been a paradigm shift in the way companies serve and consume communication services. First, the mobility revolution has enabled users to access services from anywhere and at any time using mobile devices, especially smartphones. These users access business services through the public Internet and cellular networks. At the same time, third-party SaaS (Software as a Service) providers (for example, Salesforce, Workday, Zendesk) have replaced traditional on-premises applications, while other applications hosted in private data centers have been relocated to public clouds. Although this traffic is still carried within the corporate network, a significant portion of it originates and terminates outside the perimeter of the corporate network and must cross both the public Internet (once or twice) and the corporate network. Recent studies have shown that 40% of corporate networks report that the percentage of backhaul traffic (i.e., Internet traffic observed on the corporate network) is above 80%. This means that most corporate traffic is carried over both expensive leased lines and the consumer Internet. [0003] As a consumer-centered service, the Internet itself is a poor medium for business traffic.
It lacks the reliability, QoS (quality of service) guarantees and security expected by critical business applications. In addition, increasing consumer traffic demands, network neutrality regulations and the creation of Internet bypasses by large players (for example, Netflix, Google, public clouds) have reduced the monetary return per unit of traffic. These trends have reduced service providers' incentives to respond quickly to consumer demands and to offer appropriate business services. [0004] Given the growth of public clouds, companies are migrating more of their computing infrastructure to public cloud data centers. Public cloud providers are at the forefront of investing in computing and network infrastructure. These cloud services have created many data centers around the world, with Azure, AWS, IBM and Google expanding to 38, 16, 25 and 14 regions worldwide, respectively, in 2016. Each public cloud provider has interconnected its own data centers using expensive high-speed networks that employ dark fiber and undersea cables. [0005] Today, despite these changes, corporate network policies generally force all corporate traffic to pass through the company's secure WAN gateways. As users become mobile and applications migrate to SaaS and public clouds, corporate WANs become costly detours that slow down all corporate communications. Most corporate WAN traffic either originates from or is destined for the Internet. Alternative secure solutions that send this traffic over the Internet are not suitable because of poor and unreliable performance. BRIEF SUMMARY [0006] Some embodiments establish, for an entity, a virtual network over several public cloud data centers of one or more public cloud providers in one or more regions (for example, several cities, states, countries, etc.).
An example of an entity for which a virtual network can be established includes a business entity (for example, a company), a non-profit entity (for example, a hospital, a research organization, etc.), an educational entity (for example, a university, a college, etc.) or any other type of entity. Examples of public cloud providers include Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, etc. [0007] In some embodiments, reliable, high-speed private networks interconnect two or more of the public cloud data centers (public clouds). Some embodiments define the virtual network as an overlay network that spans multiple public clouds to interconnect one or more private networks (for example, networks within branches, divisions and departments of the entity, or their associated data centers), mobile users, machines providing SaaS (Software as a Service), machines and/or services in the public cloud (or clouds) and other network applications. [0008] The virtual network, in some embodiments, can be configured to optimize the routing of the entity's data messages to their destinations for better end-to-end performance, reliability and security, while trying to minimize the routing of this traffic over the Internet. In addition, the virtual network in some embodiments can be configured to optimize the layer 4 processing of the data message flows that pass through the network. For example, in some embodiments, the virtual network optimizes the end-to-end rate of TCP (Transmission Control Protocol) connections by splitting the rate control mechanisms along the connection path. [0009] Some embodiments establish the virtual network by configuring several components that are deployed in several public clouds. These components include, in some embodiments, software-based measurement agents, software forwarding elements (for example, software routers, switches, gateways, etc.), layer 4 connection proxies and middlebox service machines (for example, appliances, VMs, containers, etc.). One or more of these components in some embodiments use standardized or commonly available solutions, such as Open vSwitch, OpenVPN, strongSwan and Ryu. [0010] Some embodiments use a logically centralized cluster of controllers (for example, a set of one or more controller servers) that configure the public cloud components to implement the virtual network over several public clouds. In some embodiments, the controllers in this cluster are in several different locations (for example, in different public cloud data centers) in order to improve redundancy and high availability. The controller cluster, in some embodiments, scales up or down the number of public cloud components used to establish the virtual network, or the computing or network resources allocated to those components. [0011] Some embodiments establish different virtual networks for different entities over the same set of public clouds from the same public cloud providers and/or over different sets of public clouds from the same or different public cloud providers. In some embodiments, a virtual network provider provides software and services that allow different tenants to define different virtual networks over the same or different public clouds. In some embodiments, the same controller cluster or different controller clusters can be used to configure the public cloud components to implement different virtual networks over the same or different sets of public clouds for several different entities.
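The layer 4 optimization described in paragraph [0008], splitting a TCP connection's rate control along its path, can be illustrated with a minimal sketch. This is not the patent's implementation: the proxy below simply terminates each inbound TCP connection and relays its bytes over a second connection, so that each leg runs its own congestion control loop. All names and port numbers here are illustrative assumptions.

```python
import socket
import threading

def pipe(src, dst):
    """Copy bytes from one socket to the other until EOF."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def split_tcp_proxy(listen_port, remote_host, remote_port, ready=None):
    """Terminate each inbound TCP connection and relay it over a fresh
    outbound connection, so each leg keeps its own rate control loop."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("127.0.0.1", listen_port))
    server.listen(5)
    if ready is not None:
        ready.set()
    while True:
        inbound, _ = server.accept()
        outbound = socket.create_connection((remote_host, remote_port))
        # Relay both directions; each socket pair maintains an independent
        # congestion window, which is the essence of TCP splitting.
        threading.Thread(target=pipe, args=(inbound, outbound), daemon=True).start()
        threading.Thread(target=pipe, args=(outbound, inbound), daemon=True).start()
```

A real deployment would place one such proxy at an ingress MFN and another near the egress, shortening each TCP control loop; the sketch omits that placement logic entirely.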
[0012] To deploy a virtual network for a tenant over one or more public clouds, the controller cluster (1) identifies possible ingress and egress routers for entering and exiting the tenant's virtual network, based on the locations of the tenant's branches, data centers and mobile users, and of the SaaS providers, and (2) identifies routes that pass from the identified ingress routers to the identified egress routers through other intermediate public cloud routers that implement the virtual network. After identifying these routes, the controller cluster propagates them to the routing tables of the virtual network routers in the public cloud (or clouds). In embodiments that use OVS-based virtual network routers, the controller distributes the routes using OpenFlow. [0013] The preceding Summary is intended to serve as a brief introduction to some embodiments of the invention. It is not meant to be an introduction or overview of all the inventive subject matter disclosed in this document. The Detailed Description that follows and the Drawings referred to in the Detailed Description further describe the embodiments described in the Summary, as well as other embodiments. Accordingly, to understand all the embodiments described by this document, a full review of the Summary, Detailed Description, Drawings and Claims is needed. Moreover, the claimed subject matter is not to be limited by the illustrative details in the Summary, Detailed Description and Drawings. BRIEF DESCRIPTION OF THE DRAWINGS [0014] The novel features of the invention are set forth in the appended claims. However, for purposes of explanation, several embodiments of the invention are shown in the following figures. [0015] Figure 1A shows a virtual network that is defined for a company across multiple public cloud data centers of two public cloud providers. [0016] Figure 1B illustrates an example of two virtual networks for two corporate tenants deployed over public clouds.
[0017] Figure 1C alternatively illustrates an example of two virtual networks, with one network deployed over public clouds and the other virtual network deployed over another pair of public clouds. [0018] Figure 2 illustrates an example of a managed forwarding node and a controller cluster of some embodiments of the invention. [0019] Figure 3 illustrates an example of a measurement graph that the controller measurement-processing layer produces in some embodiments. [0020] Figure 4A illustrates an example of a routing graph that the controller path-identification layer produces in some embodiments from the measurement graph. [0021] Figure 4B illustrates an example of adding known IPs of two SaaS providers to the two nodes in the routing graph that are in the data centers closest to the data centers of those SaaS providers. [0022] Figure 4C illustrates a routing graph that is generated by adding two nodes to represent two SaaS providers. [0023] Figure 4D illustrates a routing graph with additional nodes added to represent branches and data centers with known IP addresses that connect respectively to two public clouds. [0024] Figure 5 illustrates a process that the controller path-identification layer uses to generate a routing graph from a measurement graph received from the controller measurement layer. [0025] Figure 6 illustrates the IPsec data message format of some embodiments. [0026] Figure 7 illustrates an example of the two encapsulation headers of some embodiments, while Figure 8 presents an example that illustrates how these two headers are used in some embodiments. [0027] Figures 9 to 11 illustrate the message-handling processes that are performed respectively by the ingress, intermediate and egress MFNs when they receive a message that is sent between two computing devices at two different branches.
[0028] Figure 12 illustrates an example that does not involve an intermediate MFN between the ingress and egress MFNs. [0029] Figure 13 illustrates a message-handling process that is performed by the ingress MFN CFE when it receives a message sent from a corporate computing device at a branch to another device at another branch or at a SaaS provider data center. [0030] Figure 14 illustrates the NAT operation being performed by the egress router. [0031] Figure 15 illustrates a message-handling process that is performed by the ingress router that receives a message sent from a SaaS provider machine to a tenant machine. [0032] Figure 16 illustrates the TM engines that are placed at each virtual network gateway on the virtual network's egress path to the Internet. [0033] Figure 17 illustrates a double-NAT approach that is used in some embodiments, instead of the single-NAT approach illustrated in Figure 16. [0034] Figure 18 presents an example that illustrates the source port translation of the ingress NAT engine. [0035] Figure 19 illustrates the processing of a reply message that a SaaS machine sends in response to its processing of a data message of Figure 18. [0036] Figure 20 presents an example showing M virtual corporate WANs for M tenants of a virtual network provider that has network infrastructure and controller cluster (or clusters) in N public clouds of one or more public cloud providers. [0037] Figure 21 conceptually illustrates a process performed by the controller cluster of the virtual network provider to deploy and manage a virtual WAN for a particular tenant. [0038] Figure 22 conceptually illustrates a computer system with which some embodiments of the invention are implemented.
DETAILED DESCRIPTION [0039] In the following detailed description of the invention, numerous details, examples and embodiments of the invention are set forth and described. However, it will be clear and apparent to one skilled in the art that the invention is not limited to the embodiments set forth, and that the invention can be practiced without some of the specific details and examples discussed. [0040] Some embodiments establish, for an entity, a virtual network over several public cloud data centers of one or more public cloud providers in one or more regions (for example, several cities, states, countries, etc.). An example of an entity for which a virtual network can be established includes a business entity (for example, a company), a non-profit entity (for example, a hospital, a research organization, etc.), an educational entity (for example, a university, a college, etc.) or any other type of entity. Examples of public cloud providers include Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, etc. [0041] Some embodiments define the virtual network as an overlay network that extends across several public cloud data centers (public clouds) to interconnect one or more private networks (for example, networks within branches, divisions and departments of the entity, or their associated data centers), mobile users, machines providing SaaS (Software as a Service), machines and/or services in the public cloud (or clouds) and other network applications. In some embodiments, reliable, high-speed private networks interconnect two or more of the public cloud data centers. [0042] The virtual network, in some embodiments, can be configured to optimize the routing of the entity's data messages to their destinations for better end-to-end performance, reliability and security, while trying to minimize the routing of this traffic through the Internet.
In addition, the virtual network in some embodiments can be configured to optimize the layer 4 processing of the data message flows that pass through the network. For example, in some embodiments, the virtual network optimizes the end-to-end rate of TCP (Transmission Control Protocol) connections by splitting the rate control mechanisms along the connection path. [0043] Some embodiments establish the virtual network by configuring several components that are deployed in several public clouds. These components include, in some embodiments, software-based measurement agents, software forwarding elements (for example, software routers, switches, gateways, etc.), layer 4 connection proxies and middlebox service machines (for example, appliances, VMs, containers, etc.). [0044] Some embodiments use a logically centralized cluster of controllers (for example, a set of one or more controller servers) that configure the public cloud components to implement the virtual network over several public clouds. In some embodiments, the controllers in this cluster are in several different locations (for example, in different public cloud data centers) in order to improve redundancy and high availability. When different controllers in the controller cluster are located in different public cloud data centers, the controllers in some embodiments share their state (for example, the configuration data they generate to identify tenants, routes through the virtual networks, etc.). The controller cluster, in some embodiments, scales up or down the number of public cloud components used to establish the virtual network, or the computing or network resources allocated to those components. [0045] Some embodiments establish different virtual networks for different entities over the same set of public clouds from the same public cloud providers and/or over different sets of public clouds from the same or different public cloud providers.
In some embodiments, a virtual network provider provides software and services that allow different tenants to define different virtual networks over the same or different public clouds. In some embodiments, the same controller cluster or different controller clusters can be used to configure the public cloud components to implement different virtual networks over the same or different sets of public clouds for several different entities. [0046] Several examples of corporate virtual networks are provided in the discussion below. However, one skilled in the art will realize that some embodiments define virtual networks for other types of entities, such as other commercial entities, non-profit organizations, educational entities, etc. In addition, as used in this document, data messages refer to a collection of bits in a particular format sent across a network. One skilled in the art will recognize that the term data message is used in this document to refer to various formatted collections of bits that are sent across a network. The formatting of these bits can be specified by standardized or non-standardized protocols. Examples of data messages that follow standardized protocols include Ethernet frames, IP packets, TCP segments, UDP datagrams, etc. Also, as used in this document, references to the L2, L3, L4 and L7 layers (or layer 2, layer 3, layer 4 and layer 7) are references, respectively, to the second data link layer, the third network layer, the fourth transport layer and the seventh application layer of the OSI (Open System Interconnection) layer model. [0047] Figure 1A shows a virtual network 100 that is defined for a company across multiple public cloud data centers 105 and 110 of two public cloud providers A and B.
As shown, virtual network 100 is a secure overlay network that is established by deploying different managed forwarding nodes 150 in different public clouds and connecting the managed forwarding nodes (MFNs) to one another through overlay tunnels 152. In some embodiments, an MFN is a conceptual grouping of several different components in a public cloud data center that, with other MFNs (with other groups of components) in other public cloud data centers, establishes one or more overlay virtual networks for one or more entities. [0048] As described below, the group of components that make up an MFN includes in some embodiments (1) one or more VPN gateways for establishing VPN connections with an entity's computing nodes (for example, offices, private data centers, remote users, etc.) that are at external machine locations outside the public cloud data centers, (2) one or more forwarding elements for forwarding encapsulated data messages to one another in order to define an overlay virtual network over the shared public cloud network fabric, (3) one or more service machines for performing middlebox service operations as well as L4-L7 optimizations, and (4) one or more measurement agents for obtaining measurements of the network connection quality between the public cloud data centers, in order to identify desirable paths through the public cloud data centers. In some embodiments, different MFNs may have different arrangements and different numbers of such components, and an MFN may have different numbers of these components for reasons of redundancy and scalability. [0049] In addition, in some embodiments, the group of components of each MFN runs on different computers in the MFN's public cloud data center. In some embodiments, several or all of the components of an MFN can run on one computer in a public cloud data center.
The components of an MFN in some embodiments run on host computers that also run other machines of other tenants. These other machines can be other machines of other MFNs of other tenants, or they can be unrelated machines of other tenants (for example, compute VMs or containers). [0050] Virtual network 100 in some embodiments is deployed by a virtual network provider (VNP) that deploys different virtual networks in the same or different public cloud data centers for different entities (for example, different corporate clients/tenants of the virtual network provider). The virtual network provider in some embodiments is the entity that deploys the MFNs and provides the controller cluster for configuring and managing those MFNs. [0051] Virtual network 100 connects corporate computing endpoints (such as data centers, branches and mobile users) with each other and with external services (for example, public web services, or SaaS services such as Office365 or Salesforce) that reside in the public cloud or in a private data center accessible via the Internet. This virtual network makes use of the different locations of the different public clouds to connect different corporate computing endpoints (for example, different private networks and/or different mobile users of the company) to the public clouds nearest to them. Corporate computing endpoints are also referred to as corporate computing nodes in the discussion below. [0052] In some embodiments, virtual network 100 also takes advantage of the high-speed networks that interconnect these public clouds to forward data messages through the public clouds to their destinations, or to carry them as close to their destinations as possible, while reducing their traversal of the Internet. When the corporate computing endpoints are outside the public cloud data centers over which the virtual network extends, these endpoints are referred to as external machine locations.
This is the case with corporate branches, private data centers and devices of remote users. [0053] In the example illustrated in Figure 1A, virtual network 100 spans six data centers 105a to 105f of public cloud provider A and four data centers 110a to 110d of public cloud provider B. In spanning these public clouds, this virtual network connects several branches, corporate data centers, SaaS providers and mobile users of the corporate tenant that are located in different geographic regions. Specifically, virtual network 100 connects two branches 130a and 130b in two different cities (for example, San Francisco, California, and Pune, India), a corporate data center 134 in another city (for example, Seattle, Washington), two SaaS provider data centers 136a and 136b in two other cities (Redmond, Washington, and Paris, France), and mobile users 140 at various locations in the world. As such, this virtual network can be viewed as a virtual corporate WAN. [0054] In some embodiments, branches 130a and 130b have their own private networks (for example, local area networks) that connect computers at the branch locations, and branch private data centers that are outside the public clouds. Likewise, the corporate data center 134 in some embodiments has its own private network and resides outside any public cloud data center. In other embodiments, however, the corporate data center 134 or the data center of branch 130a or 130b can be within a public cloud, but the virtual network does not span that public cloud, as the corporate or branch data center connects to the edge of the virtual network 100. [0055] As mentioned above, the virtual network 100 is established by connecting different managed forwarding nodes 150 deployed in different public clouds through overlay tunnels 152. Each managed forwarding node 150 includes several configurable components.
As described above and further described below, the MFN components include, in some embodiments, software-based measurement agents, software forwarding elements (for example, software routers, switches, gateways, etc.), layer 4 proxies (for example, TCP proxies) and middlebox service machines (for example, VMs, containers, etc.). One or more of these components in some embodiments use standardized or commonly available solutions, such as Open vSwitch, OpenVPN, strongSwan, etc. [0056] In some embodiments, each MFN (that is, the group of components that make up an MFN) can be shared by different tenants of the virtual network provider that deploys and configures the MFNs in the public cloud data centers. Conjunctively or alternatively, the virtual network provider in some embodiments can deploy a dedicated set of MFNs in one or more public cloud data centers for a particular tenant. For example, a particular tenant may not wish to share MFN resources with another tenant for security or quality-of-service reasons. For such a tenant, the virtual network provider can deploy its own set of MFNs across several public cloud data centers. [0057] In some embodiments, a logically centralized controller cluster 160 (for example, a set of one or more controller servers) operates inside or outside one or more of the public clouds 105 and 110, and configures the public cloud components of the managed forwarding nodes 150 to implement the virtual network over the public clouds 105 and 110. In some embodiments, the controllers in this cluster are at several different locations (for example, in different public cloud data centers) in order to improve redundancy and high availability. The controller cluster, in some embodiments, scales up or down the number of public cloud components used to establish the virtual network, or the computing or network resources allocated to those components.
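The component grouping of paragraphs [0055] to [0057] can be sketched as a small data model: an MFN is a named bundle of component clusters, each of which the controller cluster can scale independently, and which may be shared by several tenants or dedicated to one. This is an illustrative model only; the class and field names are assumptions, not the patent's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ComponentCluster:
    """One MFN component (e.g., the firewall engine) run as a scalable cluster."""
    role: str            # e.g., "metering-agent", "firewall", "nat",
                         # "optimization-engine", "branch-gateway",
                         # "remote-gateway", "cloud-forwarding-element"
    instances: int = 1   # machines (VMs/containers) implementing the role

    def scale(self, delta):
        # The controller cluster adds or removes machines for this component.
        self.instances = max(1, self.instances + delta)

@dataclass
class ManagedForwardingNode:
    """Conceptual grouping of component clusters in one public cloud data center."""
    datacenter: str
    tenants: set = field(default_factory=set)      # one entry => dedicated MFN
    components: dict = field(default_factory=dict)

    def add_component(self, role):
        self.components[role] = ComponentCluster(role)

    def serves(self, tenant):
        self.tenants.add(tenant)

mfn = ManagedForwardingNode(datacenter="provider-A/us-west")
for role in ("metering-agent", "firewall", "nat", "optimization-engine",
             "branch-gateway", "remote-gateway", "cloud-forwarding-element"):
    mfn.add_component(role)
mfn.serves("tenant-1")
mfn.serves("tenant-2")                  # a shared MFN can serve several tenants
mfn.components["firewall"].scale(+2)    # controller scales one component cluster
```

The point of the model is that scaling decisions attach to individual component clusters, not to the MFN as a whole, which matches the per-component scaling described in paragraph [0057].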
[0058] In some embodiments, the controller cluster 160, or another controller cluster of the virtual network provider, establishes a different virtual network for another corporate tenant over the same public clouds 105 and 110, and/or over different public clouds of different public cloud providers. In addition to the controller cluster, the virtual network provider in other embodiments deploys forwarding elements and service machines in the public clouds that allow different tenants to deploy different virtual networks over the same or different public clouds. Figure 1B illustrates an example of two virtual networks 100 and 180 for two corporate tenants deployed over the public clouds 105 and 110. Figure 1C alternatively illustrates an example of two virtual networks 100 and 182, with one network 100 deployed over the public clouds 105 and 110 and the other virtual network 182 deployed over another pair of public clouds 110 and 115. [0059] Through the configured components of the MFNs, the virtual network 100 of Figure 1A allows different private networks and/or different mobile users of the corporate tenant to connect to different public clouds that are at optimal locations (for example, as measured in terms of physical distance, in terms of connection speed, loss, delay and/or cost, and/or in terms of network connection reliability, etc.) with respect to these private networks and/or mobile users. These components also allow the virtual network 100, in some embodiments, to use the high-speed networks that interconnect the public clouds to forward data messages through the public clouds to their destinations while reducing their traversal of the Internet. [0060] In some embodiments, the MFN components are also configured to run novel processes at the network, transport and application layers to optimize end-to-end performance, reliability and security.
In some embodiments, one or more of these processes implement proprietary high-performance networking protocols, free from the current ossification of network protocols. As such, the virtual network 100 in some embodiments is not confined by Internet autonomous systems, routing protocols, or even end-to-end transport mechanisms. [0061] For example, in some embodiments, the components of the MFNs 150 (1) create optimized, multipath and adaptive centralized routing, (2) provide strong QoS (Quality of Service) guarantees, (3) optimize end-to-end TCP rates through intermediate TCP splitting and/or termination, and (4) relocate scalable application-level middlebox services (for example, firewalls, intrusion detection systems (IDS), intrusion prevention systems (IPS), WAN optimization, etc.) to the compute part of the cloud in a global network function virtualization (NFV). Consequently, the virtual network can be optimized to meet the customized and changing demands of the company, without being bound to existing network protocols. In addition, in some embodiments, the virtual network can be configured as a "pay-as-you-go" infrastructure that can be dynamically and elastically scaled up and down, both in terms of performance capacity and in terms of geographic extent, according to continuously changing requirements. [0062] To implement the virtual network 100, at least one managed forwarding node 150 in each public cloud data center 105a to 105f and 110a to 110d spanned by the virtual network must be configured by the set of controllers. Figure 2 illustrates an example of a managed forwarding node 150 and a controller cluster 160 of some embodiments of the invention. In some embodiments, each managed forwarding node 150 is a machine (for example, a VM or container) that runs on a host computer in a public cloud data center.
In other embodiments, each managed forwarding node 150 is implemented by multiple machines (for example, multiple VMs or containers) that run on the same host computer in one public cloud data center. In still other embodiments, two or more components of an MFN can be implemented by two or more machines running on two or more host computers in one or more public cloud data centers. [0063] As shown, the managed forwarding node 150 includes a measurement agent 205, firewall and NAT middlebox service engines 210 and 215, one or more optimization engines 220, edge gateways 225 and 230, and a cloud forwarding element 235 (for example, a cloud router). In some embodiments, each of these components 205 to 235 can be implemented as a cluster of two or more components. [0064] The controller cluster 160 in some embodiments can dynamically scale each component cluster up or down (1) to add or remove machines (for example, VMs or containers) that implement the functionality of each component, and/or (2) to add or remove computing and/or network resources for the previously deployed machines that implement the components of that cluster. Thus, each MFN 150 deployed in a public cloud data center can be viewed as a cluster of MFNs, or as a node that includes several clusters of different components that perform different MFN operations. [0065] In addition, in some embodiments, the controller cluster deploys different sets of MFNs in the public cloud data centers for different tenants for which the controller cluster defines virtual networks over the public cloud data centers. In this approach, the virtual networks of any two tenants do not share any MFN. However, in the embodiments described below, each MFN can be used to implement different virtual networks for different tenants.
One of ordinary skill in the art will realize that in other embodiments the controller cluster 160 can implement the virtual network of each tenant of a first set of tenants with its own dedicated set of deployed MFNs, while implementing the virtual network of each tenant of a second set of tenants with a shared set of deployed MFNs. [0066] [0066] In some embodiments, branch office communication port 225 and remote device communication port 230 establish secure VPN connections, respectively, with one or more branches 130 and remote devices (for example, mobile devices 140) that connect to the MFN 150, as shown in Figure 2. An example of these VPN connections are IPsec connections, which will be described later. However, one skilled in the art will realize that in other embodiments, these communication ports 225 and/or 230 establish different types of VPN connections. [0067] [0067] An MFN 150 in some embodiments includes one or more middle box engines that perform one or more middle box service operations, such as firewall operations, NAT operations, IPS operations, IDS operations, load balancing operations, WAN optimization operations, etc. By incorporating these middle box operations (for example, firewall operations, WAN optimization operations, etc.) into the MFNs deployed in the public cloud, virtual network 100 implements in the public cloud many of the functions traditionally performed by the enterprise WAN infrastructure at the company's data center (or centers) and/or branch (or branches). [0068] [0068] Thus, for many of the middle box services, the corporate computing nodes (for example, remote devices, branches and data centers) no longer need to access the company's corporate WAN infrastructure at a particular data center or branch, as many of these services are now deployed in the public clouds.
This approach speeds up access by the corporate computing nodes (for example, remote devices, branch offices and data centers) to these services and avoids costly and congested network bottlenecks in private data centers that would otherwise be dedicated to offering these services. [0069] [0069] This approach effectively distributes the functionality of the WAN communication port to multiple MFNs in the public cloud data centers. For example, in virtual network 100 of some embodiments, most or all of the traditional security functions of the corporate WAN communication port (for example, firewall operations, intrusion detection operations, intrusion prevention operations, etc.) are moved to the public cloud MFNs (for example, the incoming MFNs at which data from the computing endpoints is received on the virtual network). This effectively allows virtual network 100 to have a distributed WAN communication port that is implemented on the many different MFNs that implement virtual network 100. [0070] [0070] In the example illustrated in Figure 2, MFN 150 is shown to include firewall engine 210, NAT engine 215 and one or more L4 to L7 optimization engines 220. One skilled in the art will realize that in other embodiments, the MFN 150 includes other middle box engines to perform other middle box operations. In some embodiments, the firewall engine 210 imposes firewall rules on (1) data message flows on their entry paths into the virtual network (for example, on data message flows that the communication ports 225 and 230 receive) and (2) data message flows on their exit paths out of the virtual network. [0071] [0071] The firewall engine 210 of the MFN 150 in some embodiments also applies firewall rules when the firewall engine belongs to an MFN that is an intermediate hop between an incoming MFN at which a data message flow enters the virtual network and an outgoing MFN at which the data message flow exits the virtual network.
In other embodiments, the firewall engine 210 imposes firewall rules only when it forms part of the incoming and/or outgoing MFN of a data message flow. [0072] [0072] In some embodiments, the NAT engine 215 performs a network address translation to change the source network addresses of data message flows on their exit paths out of the virtual network to third-party devices (for example, to SaaS provider machines) over the Internet 202. These network address translations ensure that third-party machines (for example, SaaS machines) can be configured correctly to process data message flows that, without the address translations, might specify private network addresses of the tenants and/or of the public cloud providers. This is particularly problematic, as the private network addresses of different tenants and/or cloud providers can overlap. The address translation also ensures that response messages from the third-party devices (for example, SaaS machines) can be properly received by the virtual network (for example, by the NAT engine of the MFN from which the message left the virtual network). [0073] [0073] The NAT engines 215 of the MFNs in some embodiments perform double NAT operations on each data message flow that leaves the virtual network to reach a third-party machine, or that enters the virtual network from a third-party machine. As described below, one of the two NAT operations is performed on the data message flow at its incoming MFN when it enters the virtual network, while the other NAT operation is performed on the data message flow at its outgoing MFN when it leaves the virtual network. [0074] [0074] This double NAT approach allows more tenant private networks to be mapped onto the networks of the public cloud providers. This approach also reduces the load of distributing to the MFNs data regarding changes in the tenants' private networks.
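The double NAT scheme of paragraphs [0072] to [0074] can be sketched as two stages: a tenant-mapping step at the incoming MFN that disambiguates overlapping private addresses, and an outgoing-side translation to a public address with a recorded reverse mapping. This is a minimal illustrative sketch; the address encoding and mapping scheme are assumptions, not the patent's actual implementation.

```python
# Minimal sketch of the double NAT of [0072]-[0074]. The tenant-mapping
# encoding and the public address pool are invented for illustration.
def ingress_map(tenant_id, private_src):
    # Incoming MFN: prefix the tenant ID so that overlapping private
    # addresses of different tenants become distinct inside the network.
    return f"T{tenant_id}:{private_src}"

def egress_nat(internal_src, public_pool, reverse_map):
    # Outgoing MFN: allocate a public source address and record the
    # reverse mapping so SaaS responses can re-enter the virtual network.
    public_src = public_pool.pop(0)
    reverse_map[public_src] = internal_src
    return public_src

reverse_map = {}
internal = ingress_map(7, "10.0.0.5")        # tenant 7's overlapping 10/8 space
public = egress_nat(internal, ["198.51.100.9"], reverse_map)
print(public, "->", reverse_map[public])
```

Two tenants can both use 10.0.0.5 as a source address: the tenant-ID prefix keeps their flows distinct until the final, per-flow public translation at the outgoing MFN.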
Prior to the inbound or outbound NAT operations, some embodiments perform a tenant mapping operation that uses the tenant identifier to first map the tenant's source network address to another source network address, which is then mapped to yet another network address by the NAT operation. Performing the double NAT operation reduces the data distribution load for distributing data related to changes in the tenants' private networks. [0075] [0075] The optimization engine 220 executes novel processes that optimize the forwarding of the entity's data messages to their destinations for better end-to-end performance and reliability. Some of these processes implement proprietary high-performance network protocols, free from the current ossification of network protocols. For example, in some embodiments, the optimization engine 220 optimizes end-to-end TCP rates through intermediate TCP splitting and/or termination. [0076] [0076] The cloud forwarding element 235 is the MFN engine responsible for forwarding a data message flow to the cloud forwarding element (CFE) of the next-hop MFN when the data message flow needs to pass to another public cloud to reach its destination, or to an outgoing router in the same public cloud when the data message flow can reach its destination through the same public cloud. In some embodiments, the CFE 235 of the MFN 150 is a software router. [0077] [0077] To forward the data messages, the CFE encapsulates the messages with tunnel headers. Different embodiments use different approaches to encapsulate the data messages with tunnel headers. Some embodiments described below use one tunnel header to identify the network entry/exit addresses for entering and exiting the virtual network, and use another tunnel header to identify the next-hop MFNs when a data message needs to pass through one or more intermediate MFNs to reach the outgoing MFN.
[0078] [0078] Specifically, in some embodiments, the CFE sends the data message with two tunnel headers: (1) an internal header that identifies an incoming CFE and an outgoing CFE for entering and exiting the virtual network and (2) an external header that identifies the next-hop CFE. The internal tunnel header in some embodiments also includes a tenant identifier (TID) to allow several different tenants of the virtual network provider to use a common set of MFN CFEs of the virtual network provider. Other embodiments define the tunnel headers differently to define the overlay virtual network. [0079] [0079] To deploy a virtual network for a tenant over one or more public clouds, the controller cluster (1) identifies possible incoming and outgoing routers for entering and exiting the virtual network for the tenant, based on the locations of the tenant's corporate computing nodes (for example, branch offices, data centers, mobile users and SaaS providers), and (2) identifies routes that pass from the identified incoming routers to the identified outgoing routers through other intermediate public cloud routers that implement the virtual network. After identifying these routes, the controller cluster propagates them to the forwarding tables of the MFN CFEs 235 in the public cloud (or clouds). In the embodiments that use OVS-based virtual network routers, the controller distributes the routes using OpenFlow.
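The two-header encapsulation of paragraphs [0077] and [0078] above can be sketched as follows. The dictionary field names are assumptions chosen for illustration; the patent does not specify a wire format at this point.

```python
# Sketch of the double tunnel encapsulation of [0077]-[0078]: an inner
# header naming the incoming/outgoing CFEs plus the tenant ID (TID),
# and an outer header naming the next-hop CFE. Field names are invented.
def encapsulate(payload, tid, ingress_cfe, egress_cfe, next_hop_cfe):
    inner = {"tid": tid, "ingress": ingress_cfe, "egress": egress_cfe}
    outer = {"src": ingress_cfe, "dst": next_hop_cfe}
    return {"outer": outer, "inner": inner, "payload": payload}

def forward(packet, my_cfe, next_hop_cfe):
    # An intermediate CFE rewrites only the outer header; the inner
    # header (and the tenant ID) stays intact from entry to exit.
    packet["outer"] = {"src": my_cfe, "dst": next_hop_cfe}
    return packet

pkt = encapsulate(b"data", tid=7, ingress_cfe="C1",
                  egress_cfe="C4", next_hop_cfe="C2")
pkt = forward(pkt, my_cfe="C2", next_hop_cfe="C4")
print(pkt["outer"]["dst"], pkt["inner"]["egress"])
```

This separation is what lets a shared set of CFEs carry many tenants: only the outer header changes per hop, while the inner header fixes the flow's tenant and its entry and exit points once.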
For example, in some embodiments, these components are configured (1) to optimize layer 3 traffic routing (for example, shortest path, packet duplication), (2) to optimize layer 4 TCP congestion control (for example, segmentation, rate control), (3) to implement security features (for example, encryption, deep packet inspection, firewall) and (4) to implement application-layer compression features (for example, deduplication, caching). Within the virtual network, corporate traffic is protected, inspected and logged. [0081] [0081] In some embodiments, one measurement agent is deployed for each MFN in a public cloud data center. In other embodiments, multiple MFNs in a public cloud data center or in a collection of data centers (for example, in a collection of nearby, associated data centers, such as the data centers in an availability zone) share one measurement agent. To optimize the processing of layers 3 and 4, the measurement agent 205 associated with each managed forwarding node 150 repeatedly generates measurement values that quantify the quality of the network connection between its node and each of several other "neighboring" nodes. [0082] [0082] Different embodiments define neighboring nodes differently. For a specific MFN in a public cloud data center of a specific public cloud provider, a neighboring node in some embodiments includes (1) any other MFN operating in any public cloud data center of the specific public cloud provider and (2) any other MFN operating in a data center of a different public cloud provider that is within the same region as the specific MFN. [0083] [0083] Different embodiments define the same region differently. For example, some embodiments define a region in terms of a distance that specifies a bounding shape around the specific managed forwarding node. Other embodiments define regions in terms of cities, states or regional areas, such as Northern California, Southern California, etc.
The assumption of this approach is that different data centers of the same public cloud provider are connected with very high-speed network connections, while the network connections between data centers of different public cloud providers are likely to be fast when the data centers are in the same region, but probably not as fast when the data centers are in different regions. The connection between data centers of different public cloud providers may have to travel long distances over the public Internet when the data centers are in different regions. [0084] [0084] Measurement agent 205 generates measurement values differently in different embodiments. In some embodiments, the measurement agent sends ping messages (for example, UDP echo messages) periodically (for example, once every second, every N seconds, every minute, every M minutes, etc.) to each of the measurement agents of its neighboring managed forwarding nodes. Given the small size of the ping messages, they do not result in large network connection charges. For example, for 100 nodes with each node pinging every other node every 10 seconds, about 10 Kb/s of inbound and outbound measurement traffic is generated for each node, which leads to network consumption charges of a few dollars (for example, $5) per node per year, given current public cloud prices. [0085] [0085] Based on the speed of the response messages it receives, the measurement agent 205 calculates and updates measurement metric values, such as network connection throughput, delay, loss and link reliability. By performing these operations repeatedly, measurement agent 205 defines and updates a matrix of measurement results that expresses the quality of the network connections to its neighboring nodes. As agent 205 interacts only with the measurement agents of its neighboring nodes, its measurement matrix quantifies only the quality of the connections to its local cluster of nodes.
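The overhead figure in paragraph [0084] can be checked with simple arithmetic. The ping size below is an assumption (the paragraph does not state one); with roughly 1000-bit probes, the claimed ~10 Kb/s per node follows.

```python
# Back-of-envelope check of the measurement overhead claim in [0084],
# under an assumed ping size of 125 bytes (~1000 bits).
nodes = 100
interval_s = 10        # each node pings every other node every 10 seconds
ping_bytes = 125       # assumed UDP echo message size

pings_per_second = (nodes - 1) / interval_s          # outbound, per node
kbps = pings_per_second * ping_bytes * 8 / 1000      # kilobits per second
print(round(kbps, 1))  # ~9.9 Kb/s outbound per node; inbound is similar
```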
[0086] [0086] The measurement agents of the different managed forwarding nodes send their measurement matrices to the controller cluster 160, which aggregates all the different link data to obtain an aggregated mesh view of the connections between the different pairs of managed forwarding nodes. When controller cluster 160 collects different measurements for a link between a pair of forwarding nodes (for example, measurements taken by a node at different times), the controller cluster produces a combined value from the different measurements (for example, it produces an average or a weighted average of the measurements). The aggregated mesh view in some embodiments is a complete mesh view of all the network connections between each pair of managed forwarding nodes, while in other embodiments it is a more complete view than those produced by the measurement agents of the individual managed forwarding nodes. [0087] [0087] As shown in Figure 2, controller cluster 160 includes a cluster of one or more measurement processing engines 280, one or more path identification engines 282 and one or more management interfaces 284. So as not to obscure the description with unnecessary detail, each of these clusters will be referred to below in terms of singular engine or interface layers, that is, in terms of a measurement processing layer 280, a path identification layer 282 and a management interface layer 284. [0088] [0088] The measurement processing layer 280 receives the measurement matrices from the measurement agents 205 of the managed forwarding nodes and processes these measurement matrices to produce the aggregated mesh matrix that expresses the connection quality between different pairs of managed forwarding nodes. The measurement processing layer 280 provides the aggregated mesh matrix to the path identification layer 282.
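The measurement combination of paragraph [0086] can be sketched with a recency-weighted average, one plausible choice among the averages the paragraph mentions. The weighting scheme and sample format are assumptions chosen for illustration.

```python
# Sketch of the controller-side combining of repeated link measurements
# from [0086]. Newer samples get exponentially larger weights; a plain
# average (decay=1.0) is the paragraph's other stated option.
def combine(samples, decay=2.0):
    """samples: list of (age_rank, value), where age_rank 0 = newest."""
    weights = [decay ** -rank for rank, _ in samples]
    weighted_sum = sum(w * value for w, (_, value) in zip(weights, samples))
    return weighted_sum / sum(weights)

# Three delay measurements of one link, newest first: 30, 40, 50 ms.
print(round(combine([(0, 30.0), (1, 40.0), (2, 50.0)]), 1))  # 35.7
```

Applied per link, this yields the single quality value per node pair that the aggregated mesh matrix of paragraph [0088] needs.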
Based on the aggregated mesh matrix, the path identification layer 282 identifies different desired routing paths through the virtual network to connect different corporate data endpoints (for example, different branch offices, corporate data centers, SaaS provider data centers and/or remote devices). This layer 282 provides these routing paths in route tables that are distributed to the cloud forwarding elements 235 of the managed forwarding nodes 150. [0089] [0089] In some embodiments, the routing path identified for each pair of data message endpoints is a routing path considered ideal based on a set of optimization criteria; for example, it is the fastest routing path, the shortest routing path, or the path that least uses the Internet. In other embodiments, the path identification engine can identify and provide (in the routing table) several different routing paths between the same two endpoints. In these embodiments, the cloud forwarding elements 235 of the managed forwarding nodes 150 select one of the paths based on QoS criteria or other runtime criteria that they are enforcing. Each CFE 235 in some embodiments does not receive the entire routing path from the CFE to the exit point of the virtual network, but rather receives the next hop of the path. [0090] [0090] In some embodiments, the path identification layer 282 uses the measurement values in the aggregated mesh matrix as inputs to the routing algorithms that it executes to build a global routing graph. This global routing graph is an aggregated and optimized version of a measurement graph that the measurement processing layer 280 produces in some embodiments. Figure 3 illustrates an example of a measurement graph 300 that the controller measurement processing layer 280 produces in some embodiments. This graph shows the network connections between multiple managed forwarding nodes 150 in the AWS and GCP public clouds 310 and 320 (that is, in the data centers of AWS and GCP).
Figure 4A illustrates an example of a routing graph 400 that the controller path identification layer 282 produces in some embodiments from the measurement graph 300. [0091] [0091] Figure 5 illustrates a process 500 that the controller path identification layer uses to generate a routing graph from a measurement graph received from the controller measurement layer. The path identification layer 282 performs this process 500 repeatedly, as it repeatedly receives updated measurement graphs from the controller measurement layer (for example, it executes process 500 each time it receives a new measurement graph, or each Nth time it receives a new measurement graph). In other embodiments, the path identification layer 282 performs this process periodically (for example, once every 12 hours or 24 hours). [0092] [0092] As shown, the path identification layer initially defines (at 505) the routing graph to be identical to the measurement graph (that is, to have the same links between the same pairs of managed forwarding nodes). At 510, the process removes bad links from the measurement graph 300. Examples of bad links are links with excessive message loss or low reliability (for example, links with a loss greater than 2% in the last 15 minutes, or a loss greater than 10% in the last 2 minutes). Figure 4A illustrates that links 302, 304 and 306 of measurement graph 300 are excluded from the routing graph 400. [0093] [0093] Then, at 515, process 500 calculates a link weight score (cost score) as a weighted combination of various provider-specific and computed values. In some embodiments, the weight score is a weighted combination of (1) the link's calculated delay value, (2) the link's calculated loss value, (3) the provider's network connection cost and (4) the provider's computing cost.
In some embodiments, the provider's computing cost is accounted for because the managed forwarding nodes connected by the link are machines (for example, VMs or containers) that run on host computers in the public cloud data center (or centers). [0094] [0094] At 520, the process adds to the routing graph the known source and destination IP addresses (for example, known IPs of SaaS providers used by the corporate entity) for the data message flows in the virtual network. In some embodiments, the process adds each known IP address of a possible endpoint of a message flow to the node (for example, the node representing an MFN) in the routing graph that is closest to that endpoint. In doing so, the process in some embodiments assumes that each of these endpoints is connected to the virtual network through a link with a zero delay cost and a zero loss cost. Figure 4B illustrates an example of adding known IPs for two SaaS providers to the two nodes 402 and 404 (representing two MFNs) in the routing graph that are in the data centers closest to the data centers of those SaaS providers. In this example, one node is in an AWS public cloud, while the other node is in the GCP public cloud. [0095] [0095] Alternatively, or jointly, process 500 in some embodiments adds the known source and destination IP addresses to the routing graph by adding nodes to this graph to represent the source and destination endpoints, assigning IP addresses to those nodes, and assigning weight values to the links that connect these added nodes to other nodes in the routing graph (for example, to nodes in the routing graph that represent MFNs in the public clouds). When the source and destination endpoints of flows are added as nodes, the path identification engine 282 can account for the cost (for example, distance cost, delay cost and/or financial cost, etc.) of reaching these nodes when identifying different routes through the virtual network between different source and destination endpoints.
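Steps 505 to 515 of process 500 (copying the measurement graph, removing bad links and scoring the remaining links) can be sketched as follows. The weight coefficients are assumptions; the text only states that the score is a weighted combination of delay, loss and the provider's network and computing costs.

```python
# Sketch of steps 505-515 of process 500: drop links with excessive loss
# and score the rest. The 2% loss cutoff follows the example in [0092];
# the coefficients k are invented for illustration.
def build_routing_graph(links, k=(1.0, 50.0, 10.0, 10.0), max_loss=0.02):
    """links: {(a, b): {'delay', 'loss', 'net_cost', 'cpu_cost'}}."""
    graph = {}
    for pair, m in links.items():
        if m["loss"] > max_loss:       # bad link (e.g. >2% loss): exclude
            continue
        graph[pair] = (k[0] * m["delay"] + k[1] * m["loss"]
                       + k[2] * m["net_cost"] + k[3] * m["cpu_cost"])
    return graph

links = {
    ("A", "B"): {"delay": 20, "loss": 0.00, "net_cost": 1, "cpu_cost": 1},
    ("B", "C"): {"delay": 5,  "loss": 0.12, "net_cost": 1, "cpu_cost": 1},
}
print(build_routing_graph(links))  # only ("A", "B") survives, weight 40.0
```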
[0096] [0096] Figure 4C illustrates a routing graph 410 that is generated by adding two nodes 412 and 414 to the routing graph 400 of Figure 4A in order to represent two SaaS providers. In this example, the known IP addresses are assigned to nodes 412 and 414, and these nodes are connected to nodes 402 and 404 (representing two MFNs) through links 416 and 418 that have weights W1 and W2 assigned to them. This approach is an alternative to the approach, illustrated in Figure 4B, of adding the known IP addresses of the two SaaS providers to nodes 402 and 404. [0097] [0097] Figure 4D illustrates a more detailed routing graph 415. In this more detailed routing graph, additional nodes 422 and 424 are added to represent external corporate computing nodes (for example, branches and data centers) with known IP addresses that connect, respectively, to the AWS and GCP public clouds 310 and 320. Each of these nodes 422/424 is connected by at least one link 426 with an associated weight value Wi to at least one of the routing graph nodes that represents an MFN. Some of these nodes (for example, some of the branches) are connected with multiple links to the same MFN or to different MFNs. [0098] [0098] Then, at 525, process 500 calculates the lowest cost paths (for example, shortest paths, etc.) between each MFN and every other MFN that can serve as a virtual network exit location for a data message flow of the corporate entity. The outgoing MFNs in some embodiments include the MFNs connected to external corporate computing nodes (for example, branch offices, corporate data centers and SaaS provider data centers), as well as the MFNs that are candidate locations for mobile device connections and outgoing Internet connections. In some embodiments, this calculation uses a traditional lowest-cost (for example, shortest-path) identification process that identifies the shortest paths between different MFN pairs.
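The lowest-cost calculation at 525 can use any standard shortest-path algorithm. A minimal Dijkstra over a weighted graph is sketched below; the undirected three-node graph is illustrative only.

```python
# Minimal Dijkstra sketch for the shortest-path identification at 525.
import heapq

def dijkstra(graph, src):
    """graph: {node: {neighbor: weight}}; returns least cost to each node."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                    # stale heap entry, skip
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

graph = {"A": {"B": 40, "C": 100}, "B": {"A": 40, "C": 30},
         "C": {"A": 100, "B": 30}}
print(dijkstra(graph, "A")["C"])  # 70.0: two hops via B beat the direct link
```

Run once per candidate outgoing MFN, this yields the per-pair lowest-cost paths from which the forwarding next hops are derived.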
[0099] [0099] For each candidate MFN pair, the lowest-cost identification process uses the calculated weight scores (that is, the scores calculated at 515) to identify the path with the lowest score when there are multiple paths between the MFN pair. Several ways of calculating lowest-cost paths will be described later. As mentioned above, the path identification layer 282 identifies multiple paths between a pair of MFNs in some embodiments. This allows the cloud forwarding elements 235 to use different paths under different circumstances. Therefore, in these embodiments, process 500 can identify multiple paths between a pair of MFNs. [00100] [00100] At 530, the process removes from the routing graph the links between MFN pairs that are not used by any of the lowest-cost paths identified at 525. Then, at 535, the process generates the routing tables for the cloud forwarding elements 235 from the routing graph. At 540, the process distributes these routing tables to the cloud forwarding elements 235 of the managed forwarding nodes. After 540, the process ends. [00101] [00101] In some embodiments, the virtual network has two types of external connections, which are: (1) secure external connections with the computing nodes (for example, branches, data centers, mobile users, etc.) of an entity and (2) external connections to third-party computers (for example, SaaS provider servers) over the Internet. Some embodiments optimize the virtual network by finding ideal locations for entering and exiting the virtual network for each data path that terminates at source and destination nodes outside the virtual network.
For example, to connect a branch office to a SaaS provider server (for example, a salesforce.com server), some embodiments connect the branch to an ideal edge MFN (for example, the MFN that has the fastest network connection to the branch, or the one closest to the branch office) and identify an ideal edge MFN for an ideally located SaaS provider server (for example, the SaaS server that is closest to the branch's edge MFN, or that has the fastest path from the branch's edge MFN through the edge MFN connected to the SaaS provider's server). [00102] [00102] To associate each computing node (for example, a branch, a mobile user, etc.) of an entity with the closest MFN through a VPN connection, the virtual network provider in some embodiments deploys one or more authoritative domain name system (DNS) servers in the public clouds for the computing nodes to contact. In some embodiments, whenever a corporate computing node needs to establish a VPN connection (that is, to initialize or reinitialize the VPN connection) with an MFN of the virtual network provider, the computing node first resolves an address associated with its virtual network (for example, virtualnetworkX.net) with this authoritative DNS server, in order to obtain from this server the identity of the MFN that this server identifies as the MFN closest to the corporate computing node. To identify this MFN, the authoritative DNS server provides an MFN identifier (for example, the MFN's IP address) in some embodiments. The corporate computing node then establishes a VPN connection to this managed forwarding node. [00103] [00103] In other embodiments, the corporate computing node does not first execute a DNS resolution (that is, it does not first resolve a network address for a specific domain) each time it needs to establish a VPN connection with an MFN of the VNP. For example, in some embodiments, the corporate computing node sticks with a DNS-resolved MFN for a specific duration (for example, for a day, a week, etc.)
before executing another DNS resolution to determine whether this MFN is still the ideal one to which to connect. [00104] [00104] When the source IP address in the DNS request is that of the local DNS server of the corporate computing node, and not that of the node itself, the authoritative DNS server in some embodiments identifies the MFN closest to the local DNS server instead of the MFN closest to the corporate computing node. To address this, the DNS request in some embodiments identifies the corporate computing node in terms of a domain name that includes one or more parts (labels) that are concatenated and delimited by dots, where one of these parts identifies the corporation and another part identifies the corporation's computing node. [00105] [00105] In some embodiments, this domain name specifies a hierarchy of domains and subdomains that descends from the right label to the left label in the domain name. The rightmost first label identifies the specific domain, a second label to the left of the first label identifies the corporate entity, and a third label to the left of the second label identifies the external machine location of the entity in cases where the entity has more than one external machine location. For example, in some embodiments, the DNS request identifies the corporate computing node as myNode of the company myCompany and requests the resolution of the address myNode.myCompany.virtualnetwork.net. The DNS server then uses the myNode identifier to better select the incoming MFN to which the corporate computing node should establish a VPN connection. In different embodiments, the myNode identifier is expressed differently. For example, it can be expressed as an IP address, a latitude/longitude description of a location, a GPS (Global Positioning System) location, a street address, etc.
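The label scheme of paragraphs [00104] and [00105] can be sketched as follows. The domain layout follows the myNode.myCompany.virtualnetwork.net example above; the location table and MFN name are invented for illustration.

```python
# Sketch of the right-to-left label parsing from [00104]-[00105] and the
# per-node incoming-MFN selection it enables. The lookup table is a
# hypothetical stand-in for the server's proximity logic.
def parse_request(fqdn, provider_domain="virtualnetwork.net"):
    labels = fqdn[: -len(provider_domain) - 1].split(".")
    node, company = labels[-2], labels[-1]   # e.g. myNode, myCompany
    return company, node

def nearest_mfn(fqdn, locations):
    company, node = parse_request(fqdn)
    return locations[(company, node)]        # MFN chosen by node location

locations = {("myCompany", "myNode"): "MFN-12"}   # hypothetical mapping
print(nearest_mfn("myNode.myCompany.virtualnetwork.net", locations))
```

Because the node's identity travels inside the queried name, the selection no longer depends on the request's source IP, which may belong to a distant local DNS server.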
[00106] [00106] Even when the IP address correctly reflects the location, there may be several potential incoming routers, for example, belonging to different data centers in the same cloud or to different clouds in the same region. In this case, the virtual network's authoritative server in some embodiments sends back a list of IPs of potential MFN CFEs (for example, C5, C8, C12). The corporate computing node in some embodiments then pings the different CFEs on the list to produce measurements (for example, distance or speed measurements) and selects the closest one by comparing the measurements among the set of candidate CFEs. [00107] [00107] In addition, the corporate computing node can base this selection on the identity of the MFNs currently used by the other computing nodes of the corporate entity. For example, in some embodiments, the corporate computing node adds connection costs to each MFN, so that if many corporate branches are already connected to a particular cloud, new computing nodes would have an incentive to connect to the same cloud, thus minimizing inter-cloud costs in terms of processing, latency and dollars. [00108] [00108] Other embodiments use other DNS resolution techniques. For example, whenever a corporate computing node (for example, a branch, a data center, a mobile user, etc.) needs to perform a DNS resolution, the corporate computing node (for example, the mobile device, or a local DNS resolver in a branch office or data center) communicates with a DNS service provider that serves as an authoritative DNS resolver for multiple entities. In some embodiments, this DNS service provider has DNS resolution machines located in one or more private data centers, while in other embodiments it is part of one or more public cloud data centers.
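The client-side selection of paragraph [00106] (measure each candidate CFE in the DNS reply and pick the best) reduces to taking a minimum over the measured values. The candidate list and RTT numbers below are hypothetical; a real node would ping the candidates itself.

```python
# Sketch of the candidate-CFE selection in [00106]. Measurements are
# passed in; in practice the node would ping each candidate first.
def pick_cfe(candidates, rtt_ms):
    """candidates: CFE IDs from the DNS reply; rtt_ms: measured RTTs."""
    return min(candidates, key=lambda cfe: rtt_ms[cfe])

candidates = ["C5", "C8", "C12"]                 # hypothetical DNS reply
rtt_ms = {"C5": 42.0, "C8": 17.5, "C12": 61.3}   # hypothetical measurements
print(pick_cfe(candidates, rtt_ms))  # C8, the lowest-latency candidate
```

The key function could equally fold in the shared-cloud incentive of paragraph [00107], for example by subtracting a discount for clouds that other branches already use.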
[00109] [00109] To identify which of the N managed forwarding nodes that connect directly to the Internet should be used to reach a SaaS provider server, the virtual network (for example, the incoming MFN, or the controller cluster that configures the MFNs) in some embodiments identifies a set of one or more candidate edge MFNs from the N managed forwarding nodes. As described below, each candidate edge MFN in some embodiments is an edge MFN considered ideal based on a set of criteria, such as distance to the SaaS provider's server, network connection speed, cost, delay and/or loss, network computing cost, etc. [00110] [00110] To assist in identifying the ideal edge points, the controller cluster of some embodiments maintains for an entity a list of the most popular SaaS providers and consumer Web destinations and their IP address subnets. For each of these destinations, the controller cluster assigns one or more of the ideal MFNs (again judged by physical distance, network connection speed, cost, loss and/or delay, computational cost, etc.) as candidate exit nodes. For each candidate outgoing MFN, the controller cluster then calculates the best possible route from each incoming MFN to the candidate MFN, and sets up the resulting next-hop table on the MFNs accordingly, so that the Internet SaaS provider or Web destination is associated with the correct virtual network next-hop node. [00111] [00111] Since the service destination can generally be reached through multiple IP subnets in multiple locations (as provided by the authoritative DNS server), there are several potential exit nodes for minimizing latency and providing load balancing. Therefore, in some embodiments, the controller cluster calculates the best exit location and exit node for each MFN and updates the next hop accordingly.
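The controller behavior of paragraph [00110] (assign a candidate exit MFN per popular destination subnet, then install next hops along the best incoming-to-outgoing path) can be sketched as follows. The subnet, MFN names and precomputed paths are all illustrative assumptions.

```python
# Sketch of next-hop table construction from [00110]: for each popular
# destination subnet, record per incoming MFN the next hop along the
# precomputed best path to the chosen exit MFN.
def build_next_hops(dest_subnets, best_paths):
    """dest_subnets: {subnet: exit MFN};
    best_paths: {(incoming, exit): [incoming, ..., exit]}."""
    table = {}
    for subnet, exit_mfn in dest_subnets.items():
        for (incoming, dst), path in best_paths.items():
            if dst == exit_mfn:
                table[(incoming, subnet)] = path[1]   # hop after the incoming MFN
    return table

dest_subnets = {"13.107.0.0/16": "MFN-C"}             # hypothetical SaaS subnet
best_paths = {("MFN-A", "MFN-C"): ["MFN-A", "MFN-B", "MFN-C"]}
table = build_next_hops(dest_subnets, best_paths)
print(table[("MFN-A", "13.107.0.0/16")])  # MFN-B
```

With several candidate exit nodes per destination, as paragraph [00111] notes, the same construction can yield multiple next-hop entries per subnet for load balancing.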
[00112] [00112] To identify the ideal path through the virtual network to an outgoing MFN that connects to the Internet or to a corporate computing node of the corporate entity, the controller cluster identifies the ideal routing paths between the MFNs. As mentioned above, the controller cluster in some embodiments identifies the best path between any two MFNs by first costing each link between a pair of directly connected MFNs, for example, based on a metric score that reflects the weighted sum of estimated latency and financial costs. The latency and financial costs include, in some embodiments, (1) link delay measurements, (2) estimated message processing latency, (3) cloud charges for outbound traffic from a given data center to another data center of the same public cloud provider, or for leaving the public cloud provider's cloud (for example, to another public cloud data center of another public cloud provider, or to the Internet) and (4) estimated message processing costs associated with the MFNs running on host computers in the public clouds. [00113] [00113] Using the calculated costs of these pairwise links, the controller cluster can calculate the cost of each routing path that uses one or more of these pairwise links by aggregating the costs of the individual pairwise links used by the routing path. As described above, the controller cluster defines its routing graph based on the calculated costs of the routing paths and generates the routing tables of the cloud routers of the MFNs based on the defined routing graph. In addition, as mentioned above, the controller cluster repeatedly performs these cost calculation, graph building, and routing table generation and distribution operations, either periodically (for example, once every 12 hours, 24 hours, etc.) or when it receives updated measurements from the measurement agents of the MFNs. [00114] [00114] Whenever the routing table in a CFE Ci of an MFN points to a next-hop CFE Cj of another MFN, CFE Ci considers Cj to be a neighbor.
In some embodiments, CFE Ci establishes a secure, actively maintained VPN tunnel to CFE Cj. A secure tunnel in some embodiments is a tunnel that requires the payloads of the encapsulated data messages to be encrypted. Also, in some embodiments, a tunnel is actively maintained when one or both of its endpoints send keep-alive signals to the other endpoint.

[00115] In other embodiments, the CFEs do not establish secure, actively maintained VPN tunnels. For instance, in some embodiments, the tunnels between the CFEs are static tunnels that are not actively monitored through the transmission of keep-alive signals. Also, in some embodiments, these tunnels between the CFEs do not encrypt their payloads. In some embodiments, the tunnels between pairs of CFEs include two encapsulating headers, with the inner header identifying the tenant ID and the incoming and outgoing CFEs for a data message entering and exiting the virtual network (i.e., entering and exiting the public cloud), and the outer tunnel header specifying the source and destination network addresses (for example, IP addresses) for traversing zero or more CFEs from the incoming CFE to the outgoing CFE.

[00116] In addition to the internal tunnels, the virtual network in some embodiments connects the corporate compute nodes to their edge MFNs through VPN tunnels, as mentioned above. Therefore, in the embodiments in which secure tunnels are used to connect the CFEs, the data messages traverse the virtual network on an entirely secure VPN path.

[00117] As the virtual network's data messages are forwarded using encapsulation within the virtual network, the virtual network in some embodiments uses its own unique network addresses that are different from the private addresses used by the different private networks of the tenant. In other embodiments, the virtual network uses the private and public network address spaces of the public clouds over which it is defined.
In still other embodiments, the virtual network uses some of its own unique network addresses for some of its components (for example, some of its MFNs, CFEs, and/or services), while using the private and public network address spaces of the public clouds for other components.

[00118] Also, in some embodiments, the virtual network uses a clean-slate communication platform with its own proprietary protocols. In the embodiments in which the data messages are forwarded entirely through software MFN routers (for example, through software CFEs), the virtual network can provide optimized rate control for long-haul end-to-end connections. This is accomplished in some embodiments by operating a TCP optimization proxy engine 220 at every MFN 150. In other embodiments that do not break the TCP itself (for example, with HTTPS), this is accomplished by the proxy engine 220 segmenting the rate control using intermediate per-flow buffering together with TCP receive-window and ACK manipulation.

[00119] Due to its clean-slate nature, the virtual network in some embodiments optimizes many of its components to provide an even better service. For instance, in some embodiments, the virtual network uses multipath routing to support premium bandwidth-guaranteed VPN setups that are routed across the virtual network. In some embodiments, these VPNs include state data in each MFN, similar to ATM/MPLS routing, and their establishment and removal are centrally controlled. Some embodiments identify the available bandwidth per outgoing link, either by measuring it directly (through packet-pair or a similar process) or by assuming a certain capacity for the link and subtracting from that capacity the traffic that is already sent through the link.

[00120] Some embodiments use the residual bandwidth of a link as a constraint.
For instance, when a link does not have at least 2 Mbps of available bandwidth, the controller cluster of some embodiments removes the link from the set of links that are used to compute the least-cost path (for example, the shortest path) to any destination (for example, it removes the link from the routing graph, such as graph 400). If an end-to-end route is still available after excluding this link, new VPNs will be routed across this new route. VPN removal can return available capacity to a given link, which in turn can enable this link to be included in the least-cost path (for example, shortest path) computation. Some embodiments use other options for multipath routing, such as load balancing of traffic across several paths, for example, by using MPTCP (multipath TCP).

[00121] Some embodiments provide a better service for premium customers by exploiting path parallelism and inexpensive cloud links to duplicate traffic from the incoming MFNs to the outgoing MFN over two disjoint paths (for example, maximally disjoint paths) in the virtual network. Under this approach, the first message that arrives is accepted, and the later one is discarded. This approach increases the virtual network reliability and reduces the delay, at the cost of increasing the egress processing complexity. In some such embodiments, forward error correction (FEC) techniques are used to increase reliability while reducing the duplication traffic. Due to its clean-slate nature, the virtual network of some embodiments performs other upper-layer optimizations, such as application-layer optimizations (for example, deduplication and caching operations) and security optimizations (for example, the addition of encryption, DPI (deep packet inspection), and firewalling).

[00122] The virtual network of some embodiments accounts for collaboration with the cloud providers, to further improve the virtual network setup by using anycast messaging.
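The residual-bandwidth constraint and least-cost path computation described above can be sketched as follows. This is an illustrative simplification, not the disclosed implementation: the link tuples, the 2 Mbps default threshold, and the cost weights (delay plus a weighted dollar cost, per the weighted-sum costing described earlier) are assumptions.

```python
# Illustrative sketch: drop links whose residual bandwidth is below a
# threshold (e.g., 2 Mbps), then run Dijkstra's algorithm over the remaining
# links, where each link cost is a weighted sum of delay and financial cost.
import heapq

def least_cost_path(links, src, dst, min_bw_mbps=2.0):
    """links: list of (node_a, node_b, delay_ms, dollar_cost, residual_bw_mbps)."""
    graph = {}
    for a, b, delay, dollars, bw in links:
        if bw < min_bw_mbps:                  # prune links below the threshold
            continue
        cost = 1.0 * delay + 10.0 * dollars   # illustrative weights
        graph.setdefault(a, []).append((b, cost))
        graph.setdefault(b, []).append((a, cost))
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:                               # Dijkstra over the pruned graph
        d, node = heapq.heappop(heap)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue
        for nbr, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(heap, (nd, nbr))
    if dst not in dist:
        return None                           # no end-to-end route remains
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

links = [
    ("A", "B", 10, 0.1, 50.0),
    ("B", "C", 10, 0.1, 1.0),   # under 2 Mbps: excluded from the graph
    ("A", "C", 40, 0.2, 20.0),
]
```

With the B-C link pruned, traffic from A to C falls back to the direct (costlier) A-C link; lowering the threshold restores the cheaper two-hop route.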
For instance, in some embodiments in which all the MFNs obtain the same external IP address, it is easier to connect any new corporate compute node to an optimal edge node (for example, the closest edge node) through an anycast connection. Likewise, any SaaS provider can obtain this IP address and connect to the optimal MFN (for example, the closest MFN).

[00123] As mentioned above, different embodiments use different types of VPN connections to connect corporate compute nodes (for example, branch offices and mobile devices) to the MFNs that establish the virtual network of a corporate entity. Some embodiments use IPsec to set up these VPN connections. Figure 6 illustrates the IPsec data message format of some embodiments. Specifically, this figure illustrates an original format of a data message 605 generated by a machine at the corporate compute node, and an IPsec encapsulated data message 610 after the data message 605 has been encapsulated (for example, at the corporate compute node or at the MFN).

[00124] In this example, the IPsec tunnel is set up with ESP Tunnel Mode, protocol 50. As shown, this mode is set up in this example by replacing the TCP protocol identifier in the IP header with an ESP protocol identifier. The ESP header identifies the start of the message 615 (i.e., the header 620 and the payload 625). The message 615 has to be authenticated by the recipient of the IPsec encapsulated data message (for example, by the MFN's IPsec communication port). The start of the payload 625 is identified by the value of the next-field 622 of the message 615. Also, the payload 625 is encrypted. This payload includes the IP header, the TCP header, and the payload of the original data message 605, along with a padding field 630, which includes the next-field 622.

[00125] In some embodiments, each MFN IPsec communication port can handle multiple connections for the same or different virtual network tenants (for example, for the same corporation or for different corporations).
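The ESP tunnel-mode layout of Figure 6 can be sketched schematically. This is a non-cryptographic illustration only: real IPsec would encrypt the payload and trailer and append an integrity check value, both of which are omitted here, and the SPI/sequence values are made up.

```python
# Schematic (non-cryptographic) sketch of the ESP tunnel-mode layout of
# Figure 6: an ESP header (SPI, sequence number), the original IP packet,
# then a trailer of padding, a pad-length byte, and a Next Header byte
# (4 = IPv4, since the encapsulated payload is an IPv4 packet).
import struct

def esp_tunnel_encapsulate(original_ip_packet, spi, seq):
    pad_len = (-(len(original_ip_packet) + 2)) % 4   # align trailer to 4 bytes
    padding = bytes(range(1, pad_len + 1))           # default monotonic padding
    next_header = 4                                  # IPv4-in-ESP (tunnel mode)
    esp_header = struct.pack("!II", spi, seq)        # SPI + sequence number
    trailer = padding + struct.pack("!BB", pad_len, next_header)
    return esp_header + original_ip_packet + trailer

# A dummy 20-byte "inner IP packet" (illustrative only).
pkt = esp_tunnel_encapsulate(b"\x45" + b"\x00" * 19, spi=0x100, seq=1)
```

The Next Header byte at the end of the trailer is what the text above calls the next-field 622, which tells the recipient how to interpret the decrypted payload.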
Therefore, an MFN IPsec communication port (for example, communication port 230) in some embodiments identifies each IPsec connection in terms of a tunnel ID, a tenant ID (TID), and a corporate compute node subnet. In some embodiments, different corporate nodes (for example, different branch offices) of a tenant do not have overlapping IP subnets (per RFC 1579). The IPsec communication port in some embodiments has a table that maps each IPsec tunnel ID (which is contained in the IPsec tunnel header) to a tenant ID. For a given tenant that an IPsec communication port is configured to handle, the IPsec communication port also has a mapping of all the subnets of that tenant that connect to the virtual network established by the MFNs and their cloud routing elements.

[00126] When a first incoming MFN in a first public cloud data center receives, through an IPsec tunnel, a data message associated with a tenant ID and destined to a destination (for example, a subnet of a branch office or data center, or a SaaS provider) that connects to a second outgoing MFN in a second public cloud data center, the IPsec communication port of the first MFN removes the IPsec tunnel header. In some embodiments, the CFE of the first MFN then encapsulates the message with two encapsulating headers that allow the message to traverse a path from the first incoming MFN to the second outgoing MFN, either directly or through one or more intermediate MFNs. The CFE of the first MFN identifies this path by using its controller-configured routing table.

[00127] As mentioned above, the two encapsulating headers in some embodiments include (1) an outer header that specifies the next-hop MFN CFE, to allow the encapsulated data message to traverse the MFNs of the virtual network to reach the outgoing MFN CFE, and (2) an inner header that specifies the tenant ID and the incoming and outgoing MFN CFEs that identify the MFNs at which the data message enters and exits the virtual network.
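The two mappings just described can be shown as a minimal sketch. The tunnel IDs, tenant names, and subnets below are entirely hypothetical values chosen for illustration.

```python
# Minimal sketch of the IPsec communication port's tables: (1) IPsec tunnel
# ID -> tenant ID, and (2) tenant ID -> the tenant's corporate subnets that
# connect to the virtual network. All values are hypothetical.

TUNNEL_TO_TENANT = {9001: "tenant-acme", 9002: "tenant-acme", 9003: "tenant-beta"}

TENANT_SUBNETS = {
    "tenant-acme": ["10.1.0.0/16", "10.2.0.0/16"],   # e.g., two branch offices
    "tenant-beta": ["192.168.0.0/24"],
}

def tenant_for_tunnel(ipsec_tunnel_id):
    """Resolve the tenant routing context from the IPsec tunnel ID."""
    return TUNNEL_TO_TENANT[ipsec_tunnel_id]
```

The non-overlapping-subnet requirement noted above is what lets one tenant table resolve a destination subnet unambiguously.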
[00128] Specifically, in some embodiments, the inner tunnel header includes a valid IP header with the destination IP address of the CFE of the second outgoing MFN and the source IP address of the CFE of the first incoming MFN. This approach allows standard IP router software to be used in every MFN CFE. The encapsulation further includes the tenant ID (for example, a customer ID). When a message arrives at the CFE of the second outgoing MFN, it is decapsulated and sent by the second MFN to its destination (for example, sent through the second MFN's IPsec communication port to the destination via another IPsec tunnel that is associated with the tenant ID and the destination subnet of the message).

[00129] Certain cloud providers prohibit machines from "spoofing" the source IP, and/or impose other restrictions on TCP and UDP traffic. To deal with such possible restrictions, some embodiments use the outer header to connect neighboring pairs of MFNs that are used by one or more routes. This header in some embodiments is a UDP header that specifies the source and destination IP addresses and the UDP protocol parameters. In some embodiments, the incoming MFN CFE specifies its own IP address as the source IP address of the outer header, while specifying the next-hop MFN CFE's IP address as the destination IP address of the outer header.

[00130] When the path to the outgoing MFN CFE includes one or more intermediate MFN CFEs, an intermediate CFE replaces the source IP address in the outer header of the double-encapsulated message that it receives with its own IP address. It also uses the destination IP address in the inner header to perform a route lookup in its routing table, in order to identify the destination IP address of the next-hop MFN CFE that is on the path to the destination IP address of the inner header. The intermediate CFE then replaces the destination IP address in the outer header with the IP address that it identified through its routing-table lookup.
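The intermediate-hop rewrite of the outer header described in the preceding paragraph can be sketched as a simple transformation. The headers are modeled as plain dictionaries, and every address and routing-table entry below is a hypothetical placeholder.

```python
# Sketch of the intermediate-CFE rewrite: the outer (VN hop) header's source
# IP becomes the intermediate CFE's own IP, and its destination IP becomes
# the next hop found by looking up the inner header's destination IP in the
# routing table. Addresses and the routing table are illustrative.

def rewrite_outer_header(outer, inner, routing_table, my_ip):
    next_hop = routing_table[inner["dst"]]   # route lookup on inner destination
    outer["src"] = my_ip                     # replace source with own IP
    outer["dst"] = next_hop                  # replace destination with next hop
    return outer

routing_table = {"172.16.3.1": "198.51.100.7"}   # egress-CFE IP -> next-hop IP
outer = {"src": "203.0.113.5", "dst": "203.0.113.9", "proto": "UDP"}
inner = {"src": "172.16.1.1", "dst": "172.16.3.1", "tenant_id": 42}
outer = rewrite_outer_header(outer, inner, routing_table, my_ip="203.0.113.9")
```

Note that the inner header is left untouched; only the hop-by-hop outer header changes as the message crosses intermediate MFNs.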
[00131] When the double-encapsulated data message reaches the outgoing MFN CFE, the CFE determines that it is the exit node for the data message when it retrieves the destination IP address in the inner header and determines that this destination IP address belongs to it. This CFE then removes the two encapsulating headers from the data message and sends it to its destination (for example, through the MFN's IPsec communication port to the destination via another IPsec tunnel that is associated with the tenant ID and the destination IP address or subnet in the original header of the data message).

[00132] Figure 7 illustrates an example of the two encapsulating headers of some embodiments, while Figure 8 presents an example that illustrates how these two headers are used in some embodiments. In the discussion below, the inner header is referred to as the tenant header, as it includes the tenant ID along with the identity of the virtual-network entry/exit nodes connected to the tenant's corporate computing end nodes. The outer header is referred to below as the VN-hop tunnel header, because it is used to identify the next hop through the virtual network as the data message traverses a path through the virtual network between the incoming and outgoing MFN CFEs.

[00133] Figure 7 shows a VN-hop tunnel header 705 and a tenant tunnel header 720 encapsulating an original data message 750 with an original header 755 and a payload 760. As shown, the VN-hop tunnel header 705 in some embodiments includes a UDP header 710 and an IP header 715. The UDP header in some embodiments is defined according to a UDP protocol. In some embodiments, the VN-hop tunnel is a standard UDP tunnel, while in other embodiments this tunnel is a proprietary UDP tunnel. In still other embodiments, this tunnel is a standard or proprietary TCP tunnel.
The tunnel that uses header 705 in some embodiments is an encrypted tunnel that encrypts its payload, while in other embodiments it is an unencrypted tunnel.

[00134] As further described below, the tunnel header 705 in some embodiments is used to define an overlay VNP network, and is used by each MFN CFE to reach the next-hop MFN CFE over the underlying public cloud networks. As such, the IP header 715 of the tunnel header 705 identifies the source and destination IP addresses of the first and second CFEs of the first and second neighboring MFNs connected by the VNP tunnel. In some cases (for example, when the next-hop destination MFN is in a different public cloud of a different public cloud provider than that of the source MFN), the source and destination IP addresses are public IP addresses that are used by the public cloud data centers that include the MFNs. In other cases, when the source and destination MFN CFEs belong to the same public cloud, the source and destination IP addresses can be private IP addresses that are used only within the public cloud. Alternatively, in such cases, the source and destination IP addresses might still be public IP addresses of the public cloud provider.

[00135] As shown in Figure 7, the tenant tunnel header 720 includes an IP header 725, a tenant ID field 730, and a virtual circuit label (VCL) 735. The tenant tunnel header 720 is used by each hop CFE after the incoming hop CFE to identify the next hop for forwarding the data message to the outgoing CFE of the outgoing MFN. As such, the IP header 725 includes a source IP address that is the incoming CFE's IP address, and a destination IP address that is the outgoing CFE's IP address.
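The two headers of Figure 7 can be modeled roughly as the data structures below. The field selection follows the text (UDP + IP for the VN-hop header 705; IP, tenant ID, and VCL for the tenant header 720), but the sizes, ordering, and sample values are schematic assumptions, not a wire-exact format.

```python
# Rough Python model of the two encapsulating headers of Figure 7.
# Field names follow the text; everything else is schematic.
from dataclasses import dataclass

@dataclass
class VNHopTunnelHeader:        # outer header 705: UDP header 710 + IP header 715
    src_ip: str                 # CFE of the current hop
    dst_ip: str                 # CFE of the next hop
    udp_src_port: int
    udp_dst_port: int

@dataclass
class TenantTunnelHeader:       # inner header 720: IP header 725 + TID 730 + VCL 735
    src_ip: str                 # incoming CFE
    dst_ip: str                 # outgoing CFE
    tenant_id: int              # unique per corporate entity
    vcl: int = 0                # optional virtual circuit label

@dataclass
class EncapsulatedMessage:
    outer: VNHopTunnelHeader
    inner: TenantTunnelHeader
    original: bytes             # original header 755 + payload 760

msg = EncapsulatedMessage(
    outer=VNHopTunnelHeader("203.0.113.5", "203.0.113.9", 40000, 4500),
    inner=TenantTunnelHeader("203.0.113.5", "198.51.100.7", tenant_id=42),
    original=b"original packet",
)
```

The outer header changes at every hop, while the inner header stays fixed from the incoming CFE to the outgoing CFE, mirroring the hop-by-hop rewrite described earlier.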
As with the source and destination IP addresses of the VN-hop header 705, the source and destination IP addresses of the tenant header 720 can be private IP addresses of one public cloud provider (when the data message traverses a route that only goes through one public cloud provider's data centers), or public IP addresses of one or more public cloud providers (for example, when the data message traverses a route that goes through the data centers of two or more public cloud providers).

[00136] The IP header of the tenant header 720 can be routed by using any standard software router and IP routing table in some embodiments. The tenant ID field 730 contains the tenant ID, which is a unique tenant identifier that can be used at the incoming and outgoing MFNs to uniquely identify a tenant. The virtual network provider in some embodiments defines different tenant IDs for different corporate entities that are tenants of the provider. The VCL field 735 is an optional routing field that some embodiments use to provide an alternative (non-IP-based) way of forwarding messages through the network.

[00137] Figure 8 presents an example that illustrates how these two tunnel headers 705 and 720 are used in some embodiments. In this example, a data message 800 is sent from a first machine 802 (for example, a first VM) in a first branch office 805 of a company to a second machine 804 (for example, a second VM) in a second branch office 810 of the company. The two machines are in two different subnets, 10.1.0.0 and 10.2.0.0, with the first machine having an IP address 10.1.0.17 and the second machine having an IP address 10.2.0.22. In this example, the first branch 805 connects to an incoming MFN 850 in a first public cloud data center 830, while the second branch 810 connects to an outgoing MFN 855 in a second public cloud data center 838.
Also, in this example, the incoming and outgoing MFNs 850 and 855 of the first and second public cloud data centers are indirectly connected through an intermediate MFN 857 of a third public cloud data center 836.

[00138] As shown, the data message 800 from the machine 802 is sent to the incoming MFN 850 along an IPsec tunnel 870 that connects the first branch office 805 to the incoming MFN 850. This IPsec tunnel 870 is established between an IPsec communication port 848 of the first branch office and an IPsec communication port 852 of the incoming MFN 850. This tunnel is established by encapsulating the data message 800 with an IPsec tunnel header 806.

[00139] The IPsec communication port 852 of the MFN 850 decapsulates the data message (i.e., removes the IPsec tunnel header 806) and passes the decapsulated message to this MFN's CFE 832, either directly or through one or more intermediate middlebox service machines (for example, through a firewall machine, such as the machine 210 of Figure 2). In passing along this message, the communication port 852 in some embodiments also provides the tenant ID and/or the IPsec tunnel ID that it associates with the data message.

[00140] Based on the associated tenant ID and/or IPsec tunnel ID, the CFE 832 of the MFN 850 identifies a route for the message to the destination machine's subnet (i.e., to the second branch office 810) through the virtual network that is established by the MFNs in the different public cloud data centers. For instance, the CFE 832 uses the tenant ID and/or the IPsec tunnel ID to identify the routing table for the company. In this routing table, the CFE 832 then uses the destination IP address 10.2.0.22 of the received message to identify a record that identifies the CFE 853 of the outgoing MFN 855 of the public cloud data center 838 as the destination outgoing forwarding node for the data message 800. In some embodiments, the identified record maps the entire subnet 10.2.0.0/16 of the second branch office 810 to the CFE 853 of the MFN 855.
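The ingress lookup just described, in which a longest-prefix match on destination 10.2.0.22 selects the record mapping subnet 10.2.0.0/16 to the outgoing CFE, can be sketched with the standard `ipaddress` module. The CFE labels come from the Figure 8 example, but the table itself is an illustrative assumption.

```python
# Sketch of the tenant routing-table lookup: a longest-prefix match (LPM)
# on the destination IP selects the record that maps the destination subnet
# to the outgoing CFE. Table contents are illustrative.
import ipaddress

TENANT_ROUTES = {                    # subnet -> outgoing CFE
    "10.1.0.0/16": "CFE-832",
    "10.2.0.0/16": "CFE-853",
}

def lpm_lookup(dst_ip, routes):
    """Return the outgoing CFE for the longest matching prefix, or None."""
    addr = ipaddress.ip_address(dst_ip)
    best = None
    for prefix, egress in routes.items():
        net = ipaddress.ip_network(prefix)
        if addr in net and (best is None or net.prefixlen > best[0]):
            best = (net.prefixlen, egress)
    return best[1] if best else None
```

A production forwarding element would use a trie or TCAM rather than a linear scan, but the matching semantics are the same.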
[00141] After identifying the outgoing CFE 853, the CFE 832 of the incoming MFN 850 encapsulates the received data message with a tenant tunnel header 860 that, in its IP header 725, includes the source IP of the incoming CFE 832 and the destination IP of the outgoing CFE 853. In some embodiments, these IP addresses are defined in the public IP address space. The tunnel header 860 also includes the tenant ID that was associated with the data message at the incoming MFN 850. As mentioned above, this tunnel header in some embodiments also includes the VCL header value.

[00142] In some embodiments, the incoming CFE 832 also identifies the next-hop MFN that is on the desired CFE routing path to the outgoing CFE 853. In some embodiments, the incoming CFE 832 identifies this next-hop CFE in its routing table by using the destination IP address of the outgoing CFE 853.

[00143] After identifying the next-hop MFN CFE, the incoming MFN CFE encapsulates the encapsulated data message 800 with a second, VN-hop tunnel header 862. This tunnel header allows the message to be forwarded to the next-hop CFE 856. In the IP header 715 of this outer header 862, the CFE 832 of the incoming MFN specifies the source and destination IP addresses as the source IP of the incoming CFE 832 and the destination IP of the intermediate CFE 856. It also specifies its layer-4 protocol as UDP in some embodiments.

[00144] When the CFE 856 of the third MFN 857 receives the double-encapsulated data message, it removes the VN-hop tunnel header 862, and extracts from the tenant header 860 the destination IP address of the CFE 853 of the outgoing MFN 855. Since this IP address is not associated with the CFE 856, the data message still has to be forwarded to another MFN to reach its destination. Accordingly, the CFE 856 uses the extracted destination IP address to identify a record in its routing table that identifies the next-hop MFN CFE 853.
It then changes and re-encapsulates the data message with the outer header 705, and specifies the source and destination IP addresses in its IP header 715 as its own IP address and the destination IP address of the MFN CFE 853. The CFE 856 then forwards the double-encapsulated data message 800 to the outgoing CFE 853 through the intervening routing fabric of the public cloud data centers 836 and 838.

[00145] After receiving the encapsulated data message, the outgoing CFE 853 determines that the encapsulated message is directed to it when it retrieves the destination IP address in the inner header 860 and determines that this destination IP address belongs to it. The outgoing CFE 853 removes the encapsulating headers 860 and 862 from the data message 800, and then extracts the destination IP address in the original header of the data message. This destination IP address identifies the IP address of the second machine 804 in the subnet of the second branch office.

[00146] Using the tenant ID in the removed tenant tunnel header 860, the outgoing CFE 853 identifies the correct routing table to search, and then searches this routing table based on the destination IP address extracted from the original header value of the received data message. From this lookup, the outgoing CFE 853 identifies a record that identifies the IPsec connection to use for forwarding the data message to its destination. It then provides the data message along with the IPsec connection identifier to the second MFN's IPsec communication port 858, which encapsulates this message with an IPsec tunnel header 859 and forwards it to an IPsec communication port 854 of the second branch office 810. The communication port 854 then removes the IPsec tunnel header and forwards the data message to its destination machine 804.

[00147] Several more detailed message-processing examples will now be described by reference to Figures 9 to 15.
In these examples, it is assumed that each tenant IPsec interface is on the same local public IP address, as are the VNP tunnels. As such, the interfaces in some embodiments are attached to a single VRF (virtual routing and forwarding) namespace. This VRF namespace is referred to below as the VNP namespace.

[00148] Figures 9 to 11 illustrate message-handling processes 900 to 1100 that are performed respectively by the incoming, intermediate, and outgoing MFNs when they receive a message that is sent between two compute devices in two different external machine locations (for example, branch offices, data centers, etc.) of a tenant. In some embodiments, the controller cluster 160 configures the CFE of each MFN to operate as an incoming, intermediate, and outgoing CFE, as each such CFE is a candidate to serve as an incoming, intermediate, or outgoing CFE for different data message flows of a tenant.

[00149] The processes 900 to 1100 will be explained below by reference to two examples in Figures 8 and 12. As mentioned above, Figure 8 illustrates an example in which the data message goes through an intermediate MFN to get to the outgoing MFN. Figure 12 illustrates an example that does not involve an intermediate MFN between the incoming and outgoing MFNs. Specifically, it illustrates a data message 1200 being sent from a first device 1202 in a first branch office 1205 to a second device 1210 in a second branch office 1220, when the two branch offices connect to two public cloud data centers 1230 and 1238 with two MFNs 1250 and 1255 that are directly connected. As shown, the CFEs 1232 and 1253 of the MFNs in these examples perform the routing operations associated with each MFN.

[00150] The incoming CFE (for example, incoming CFE 832 or 1232) of the incoming MFNs 850 and 1250 performs the process 900 in some embodiments.
As shown in Figure 9, the process 900 starts by initially identifying (at 905) the tenant routing context based on the identifier of the IPsec tunnel (for example, 806 or 1206) in the received data message. In some embodiments, the IPsec communication ports or other MFN modules store the tenant IDs for the IPsec tunnel IDs in mapping tables. Whenever a data message is received along a particular IPsec tunnel, the IPsec communication port extracts the IPsec tunnel ID, which this communication port or another MFN module then uses to identify the associated tenant ID by reference to its mapping table. By identifying the tenant ID, the process identifies the tenant routing table or the tenant portion of the VRF namespace to use.

[00151] At 910, the process increments the RX (receive) counter of the identified IPsec tunnel to account for the receipt of this data message. Next, at 915, the process performs a route lookup (for example, a longest prefix match, LPM, lookup) in the identified tenant routing context (for example, in the tenant's portion of the VRF namespace) to identify the IP address of the exit interface for exiting the tenant's virtual network that is built over the public cloud data centers. For the branch-to-branch examples, the exit interface is the IP address of an outgoing CFE (for example, CFE 853 or 1253) of an MFN connected to the destination branch office.

[00152] At 920, the process adds a tenant tunnel header (for example, header 860 or 1260) to the received data message, and embeds the source IP address of the incoming CFE (for example, incoming CFE 832 or 1232) and the destination IP address of the outgoing CFE (for example, outgoing CFE 853 or 1253) as the source and destination IP addresses in this tunnel header. The process also stores the tenant ID (identified at 905) in the tenant header.
At 920, the process also adds a VN-hop tunnel header (for example, header 862 or 1262) outside of the tenant header, and stores its own IP address as the source IP address in this header. The process also specifies (at 920) the UDP parameters (for example, the UDP port) of the VNP tunnel header.

[00153] Next, at 925, the process increments the VN-transmit counter for the tenant to account for this data message's transmission. At 930, the process performs a route lookup (for example, an LPM lookup) in the identified VNP routing context (for example, in the VNP's portion of the VRF namespace) to identify the next-hop interface for this data message. In some embodiments, this route lookup is an LPM lookup (for example, in the VNP's portion of the VRF namespace) that is at least partially based on the outgoing CFE's destination IP.

[00154] At 935, the process determines whether the next-hop egress interface is a local interface (for example, a physical or virtual port) of the incoming CFE. If so, the process defines (at 937) the destination IP address in the VN-hop outer tunnel header as the exit-interface IP address identified at 915. Next, at 940, the process provides the double-encapsulated data message to its local interface, so that it can be forwarded to the destination's outgoing CFE. After 940, the process 900 ends.

[00155] Figure 12 illustrates an example of the operations 905 to 940 for the data message 1200 that the incoming CFE 1232 receives from the device 1202 of the first branch office 1205. As shown, this CFE's MFN 1250 receives this data message as an IPsec-encapsulated message at its IPsec communication port 1252 from the IPsec communication port 1248 of the first branch office 1205.
The incoming CFE 1232 encapsulates the received message 1200 (after its IPsec header has been removed by the IPsec communication port 1252) with a VN-hop tunnel header 1262 and a tenant tunnel header 1260, and forwards this double-encapsulated message to the CFE 1253 of the outgoing MFN 1255 of the public cloud data center 1238.

[00156] When the process determines (at 935) that the next-hop egress interface is not a local interface of the incoming CFE but rather the destination IP address of another router, the process embeds (at 945) in the VN-hop tunnel header the destination IP address of the next-hop intermediate CFE.

[00157] Next, at 950, the process performs another route lookup (for example, an LPM lookup) in the identified VNP routing context (for example, in the VNP's portion of the VRF namespace). This time, the lookup is based on the IP address of the intermediate CFE that is identified in the VNP tunnel header. As the intermediate CFE (for example, CFE 856) is a next-hop CFE in the virtual network for the incoming CFE (for example, CFE 832), the routing table identifies a local interface (for example, a local port) for data messages sent to the intermediate CFE. Thus, this lookup in the VNP routing context identifies a local interface, to which the incoming CFE provides (at 950) the double-encapsulated message. The process then increments (at 955) the VN-intermediate counter to account for this data message's transmission. After 955, the process ends.

[00158] Figure 10 illustrates a process 1000 that a CFE (for example, CFE 853 or 1253) of an outgoing MFN performs in some embodiments when it receives a data message that should be forwarded to a corporate compute node (for example, a branch office, data center, or remote user location) connected to the MFN. As shown, the process initially receives (at 1005) the data message on an interface associated with the virtual network.
This message is encapsulated with the VN-hop tunnel header (for example, header 862 or 1262) and the tenant tunnel header (for example, header 860 or 1260).

[00159] At 1010, the process determines that the destination IP address in the VN-hop tunnel header is its CFE's destination IP address (for example, the IP address of CFE 853 or 1253). Next, at 1015, the process removes the two tunnel headers. The process then retrieves (at 1020) the tenant ID from the removed tenant tunnel header. To account for the received data message, the CFE then increments (at 1025) the RX (receive) counter that it maintains for the tenant specified by the extracted tenant ID.

[00160] Next, at 1030, the process performs a route lookup (for example, an LPM lookup) in the identified tenant routing context (i.e., in the tenant routing context identified by the tenant ID extracted at 1020) to identify the next-hop interface for this data message. The process performs this lookup based on the destination IP address in the original header (for example, header 755) of the received data message in some embodiments. From the record identified through this lookup, the process 1000 identifies the IPsec interface through which the data message has to be sent to its destination. Accordingly, the process 1000 provides the decapsulated, received data message to its MFN's IPsec communication port (for example, communication port 858 or 1258).

[00161] This communication port then encapsulates the data message with an IPsec tunnel header (for example, tunnel header 859 or 1259) and forwards it to a communication port (for example, communication port 854 or 1254) of the destination corporate compute node (for example, the destination branch office), where it will be decapsulated and forwarded to its destination.
After 1030, the CFE or its MFN increments (at 1035) the counter that it maintains for transmitting messages along the IPsec connection to the destination corporate compute node (for example, the IPsec connection between the communication ports 854 and 858, or between the communication ports 1254 and 1258).

[00162] Figure 11 illustrates a process 1100 that a CFE (for example, CFE 856) of an intermediate MFN performs in some embodiments when it receives a data message that should be forwarded to another CFE of another MFN. As shown, the process initially receives (at 1105) the data message on an interface associated with the virtual network. In some embodiments, this message is encapsulated with two tunnel headers, a VN tunnel header (for example, header 862) and a tenant tunnel header (for example, header 860).

[00163] At 1110, the process terminates the VN-hop tunnel, as it determines that the destination IP address in this tunnel header is its CFE's destination IP address (for example, the destination IP address of CFE 856). Next, at 1115, the process determines whether the VN-hop tunnel header specifies the correct UDP port. If not, the process ends. Otherwise, at 1120, the process removes the VN-hop tunnel header. To account for the received data message, the CFE then increments (at 1125) the RX (receive) counter that it maintains to quantify the number of messages that it has received as an intermediate-hop CFE.

[00164] At 1130, the process performs a route lookup (for example, an LPM lookup) in the identified VNP routing context (for example, in the VNP's portion of the VRF namespace) to identify the next-hop interface for this data message. In some embodiments, this route lookup is an LPM lookup (for example, in the VNP's portion of the VRF namespace) that is at least partially based on the outgoing CFE's destination IP that is identified in the inner tenant tunnel header.
[00165] The process then determines (at 1135) whether the next hop output interface is a local interface of the intermediate CFE. If so, the process adds (at 1140) the VN hop tunnel header to the data message, which is already encapsulated with the tenant tunnel header. The process sets (at 1142) the destination IP address in the VN hop tunnel header to the outgoing destination CFE address specified in the tenant tunnel header. It also sets (at 1142) the source IP address in the VN hop tunnel header to the IP address of its CFE. In this tunnel header, the process also defines the UDP attributes (for example, the UDP port, etc.). [00166] Then, at 1144, the process provides the double-encapsulated data message on its local interface (identified at 1130) so that it can be forwarded to the destination outgoing CFE. An example of this VN hop tunnel decapsulation and forwarding was described above by reference to the operations of CFE 856 in Figure 8. To account for the received data message, the CFE increments (at 1146) the TX (transmission) counter that it maintains to quantify the number of messages it has transmitted as an intermediate hop CFE. After 1146, process 1100 ends. [00167] On the other hand, when the process determines (at 1135) that the next hop output interface is not a local interface of its CFE but rather the destination IP address of another router, the process adds (at 1150) a VN hop tunnel header to the data message from which it previously removed a VN hop tunnel header. In the new VN hop tunnel header, process 1100 incorporates (at 1150) the source IP address of its CFE and the destination IP address (identified at 1130) of the next hop intermediate CFE as the source and destination IP addresses of the VN hop tunnel header. This VNP tunnel header also specifies a layer 4 UDP protocol with a UDP destination port.
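The branch at 1135-1150 can be sketched as follows; the routing-table structure and field names are illustrative assumptions, and the LPM lookup is modeled as a simple dictionary keyed by the outgoing CFE's IP address.

```python
# Sketch of the intermediate-CFE branch in process 1100 (operations 1135-1150).
# The route-table layout and all names are illustrative assumptions.

def intermediate_cfe_forward(inner_dest_cfe_ip, my_cfe_ip, vnp_routes):
    """Build the new VN hop header after the 1130 lookup. The addresses differ
    depending on whether the next hop is the outgoing CFE itself (reachable on
    a local interface) or another intermediate CFE."""
    next_hop = vnp_routes[inner_dest_cfe_ip]   # dict stand-in for the LPM lookup
    if next_hop["local"]:
        # 1140-1142: tunnel straight to the outgoing destination CFE
        return {"src_ip": my_cfe_ip, "dest_ip": inner_dest_cfe_ip, "proto": "udp"}
    # 1150: tunnel to the next hop intermediate CFE instead
    return {"src_ip": my_cfe_ip, "dest_ip": next_hop["next_cfe_ip"], "proto": "udp"}
```

Either way, the tenant tunnel header underneath is untouched; only the outer VN hop header is rewritten at each intermediate hop.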
[00168] Then, at 1155, the process performs another route lookup (for example, an LPM lookup) in the identified VNP routing context (for example, in the VNP portion of the VRF namespace). This time, the lookup is based on the IP address of the next hop intermediate CFE identified in the new VN hop tunnel header. Since this intermediate CFE is the current intermediate CFE's next hop in the virtual network, the routing table identifies a local interface for data messages sent to the next hop intermediate CFE. Therefore, this lookup in the VNP routing context identifies a local interface, to which the current intermediate CFE provides the double-encapsulated message. [00169] Figure 13 illustrates a message handling process 1300 that is performed by the CFE of the incoming MFN when it receives a message for a tenant that is sent from a corporate computing device of the tenant (for example, at a branch) to another tenant machine (for example, at another branch, at a tenant data center, or at a data center of a SaaS provider). Process 900 of Figure 9 is a subset of this process 1300, as described below. As shown in Figure 13, process 1300 begins by initially identifying (at 905) the tenant routing context based on the identifier of the incoming IPsec tunnel. [00170] At 1310, the process determines whether the source and destination IP addresses in the header of the received data message are public IP addresses. If so, the process discards (at 1315) the data message and increments the drop counter that it maintains for the IPsec tunnel of the received data message. The process discards the message at 1315 because it should not receive messages addressed to and from public IP addresses when it receives messages through the tenant's IPsec tunnel. In some embodiments, process 1300 also sends an ICMP error message back to the source corporate computing machine.
[00171] On the other hand, when the process determines (at 1310) that the data message did not both come from a public IP address and go to another public IP address, the process determines (at 1320) whether the destination IP address in the header of the received data message is a public IP address. If not, the process transitions to 1325 to execute process 900 of Figure 9, with the exception of operation 905, which was performed at the start of process 1300. After 1325, process 1300 ends. On the other hand, when process 1300 determines (at 1320) that the destination IP address in the header of the received data message is a public IP address, the process increments (at 1330) the RX (receive) counter of the identified IPsec tunnel to account for the receipt of this data message. [00172] Process 1300 then performs (at 1335) a route lookup (for example, an LPM lookup) in the identified tenant routing context (for example, in the tenant portion of the VRF namespace). This lookup identifies the IP address of the outgoing interface for exiting the tenant's virtual network that is built over the public cloud data centers. In the example illustrated in Figure 13, process 1300 reaches lookup operation 1335 when the data message is addressed to a machine in a SaaS provider data center. Therefore, this lookup identifies the IP address of the outgoing router for leaving the tenant's virtual network to reach the SaaS provider machine. In some embodiments, all the SaaS provider routes are installed in one route table or in one portion of the VRF namespace, while in other embodiments the routes to different SaaS providers are stored in different route tables or in different portions of the VRF namespace. [00173] At 1340, the process adds a tenant tunnel header to the received data message and incorporates the source IP address of the incoming CFE and the destination IP address of the outgoing router as the source and destination IP addresses in this tunnel header.
Then, at 1345, the process increments the VN transmission counter for the tenant to account for the transmission of this data message. At 1350, the process performs a route lookup (for example, an LPM lookup) in the VNP routing context (for example, in the VNP portion of the VRF namespace) to identify one of its local interfaces as the next hop interface for this data message. When the next hop is another CFE (for example, in another public cloud data center), the process in some embodiments further encapsulates the data message with the VN hop header and incorporates its CFE's IP address and the other CFE's IP address as the source and destination addresses of the VN hop header. At 1355, the process provides the encapsulated data message on its local interface so that it can be forwarded to the outgoing router. After 1355, process 1300 ends. [00174] In some cases, the incoming MFN may receive a data message for a tenant that its CFE can forward directly to the destination machine of the data message, without going through the CFE of another MFN. In some such cases, the data message does not need to be encapsulated with a tenant header or a VN hop header, when the CFE does not need to relay any tenant-specific information to any other subsequent VN processing module, or when the necessary information can be provided to the subsequent VN processing module through other mechanisms. [00175] For example, to directly forward a tenant's data message to an external SaaS provider data center, the NAT engine 215 of the incoming MFN would have to perform a NAT operation based on the tenant identifier, as described below. The incoming CFE or another module of the incoming MFN must provide the tenant identifier to the incoming MFN's associated NAT engine 215. When the incoming CFE and NAT engines run on the same computer, some embodiments share this information between these two modules by storing it in a shared memory location.
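The encapsulation performed at 1340-1350 can be sketched as follows; the header layout and function signature are illustrative assumptions rather than the specification's data structures.

```python
# Sketch of the ingress-CFE encapsulation in process 1300 (operations 1340-1350).
# Header field names and the optional next-hop argument are assumptions.

def ingress_encapsulate(original_msg, tenant_id, ingress_cfe_ip, egress_router_ip,
                        next_hop_cfe_ip=None):
    """Add the tenant tunnel header (1340); when the next hop is another CFE,
    also add the outer VN hop header (1350)."""
    msg = {"original_header": original_msg,
           "tenant_header": {"tenant_id": tenant_id,
                             "src_ip": ingress_cfe_ip,       # 1340: ingress CFE
                             "dest_ip": egress_router_ip}}   # 1340: egress router
    if next_hop_cfe_ip is not None:
        msg["vn_hop_header"] = {"src_ip": ingress_cfe_ip,    # 1350: outer hop header
                                "dest_ip": next_hop_cfe_ip}
    return msg
```

When the egress router is reachable on a local interface, the outer header is simply omitted and the single-encapsulated message is handed to that interface.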
On the other hand, when the CFE and NAT engines do not run on the same computer, some embodiments use other mechanisms (for example, out-of-band communication) to share the tenant ID between the incoming CFE and NAT engines. In such cases, however, other embodiments use an encapsulating header (that is, use in-band communication) to store and share the tenant ID between the different modules of the incoming MFN. [00176] As described below, some embodiments perform one or two NAT operations on the source IP address/port of a data message before sending the message out of a tenant's virtual network. Figure 14 illustrates the NAT operation being performed at the outgoing router. However, as further described below, some embodiments also perform another NAT operation on the data message at the incoming router, even though that extra NAT operation was not described above by reference to Figure 13. [00177] Figure 14 illustrates a process 1400 that an outgoing router performs in some embodiments when it receives a data message that must be forwarded to a SaaS provider data center over the Internet. As shown, the process initially receives (at 1405) the data message on an interface associated with the virtual network. This message is encapsulated with the tenant tunnel header. [00178] At 1410, the process determines that the destination IP address in this tunnel header is its router's destination IP address, and it therefore removes the tenant tunnel header. The process retrieves (at 1415) the tenant ID from the removed tunnel header. To account for the received data message, the process increments (at 1420) the RX (receive) counter that it maintains for the tenant specified by the extracted tenant ID. [00179] Next, at 1425, the process determines whether the destination IP address in the original header of the data message is public and reachable through a local interface (for example, a local port) of the outgoing router.
This local interface is an interface that is not associated with a VNP tunnel. If not, the process ends. Otherwise, the process performs (at 1430) a source NAT operation to change the source IP address/port of the data message in this message's header. The NAT operation and the reason for performing it are further described below by reference to Figures 16 and 17. [00180] After 1430, the process performs (at 1435) a route lookup (for example, an LPM lookup) in the Internet routing context (that is, in the Internet routing portion of the routing data, for example, the router's Internet VRF namespace) to identify the next hop interface for this data message. The process performs this lookup based on the destination network address (for example, the destination IP address) of the original header of the received data message in some embodiments. From the record identified through this lookup, process 1400 identifies the local interface through which the data message must be sent to its destination. Therefore, at 1435, process 1400 provides the source-network-address-translated data message to this identified local interface for forwarding to its destination. After 1435, the process increments (at 1440) the counter that it maintains for messages transmitted to the SaaS provider, and then ends. [00181] Figure 15 illustrates a message handling process 1500 that is performed by the incoming router that receives a message sent from a SaaS provider machine to a tenant machine. As shown, the incoming process 1500 starts by initially receiving (at 1505) a data message on a dedicated input interface with a public IP address that is used for several or all communications from the SaaS provider. In some embodiments, this input interface is a different interface, with a different IP address, than the one used to communicate with the virtual network.
[00182] After receiving the message, the process performs (at 1510) a route lookup in a public Internet routing context, using the destination IP address contained in the header of the received data message. Based on this lookup, the process determines (at 1515) whether the destination IP address is local and associated with an enabled NAT operation. If not, the process ends. Otherwise, the process increments the RX (receive) counter that it maintains to account for the received data message. [00183] Then, at 1525, the process performs a reverse NAT operation that converts the destination IP/port addresses of the data message into new destination IP/port addresses that the virtual network associates with a particular tenant. This NAT operation also produces the tenant ID (for example, it retrieves the tenant ID from a mapping table that associates tenant IDs with translated destination IPs, or it retrieves the tenant ID from the same mapping table that is used to obtain the new destination IP/port addresses). In some embodiments, process 1500 uses a connection record that process 1400 created when it performed (at 1430) its SNAT operation, in order to perform (at 1525) its reverse NAT operation. This connection record contains the mapping between the internal and external IP/port addresses used by the SNAT and DNAT operations. [00184] Based on the translated destination network address, the process performs (at 1530) a route lookup (for example, an LPM lookup) in the identified tenant routing context (that is, the routing context specified by the tenant ID) to identify the IP address of the outgoing interface for exiting the tenant's virtual network and reaching the tenant's machine at a corporate computing node (for example, at a branch office). This outgoing interface is the IP address of an outgoing CFE of an outgoing MFN in some embodiments.
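The interplay between the SNAT of operation 1430 and the reverse NAT of operation 1525 can be sketched with a small stateful NAT model. The connection-record layout, the sequential port-allocation policy, and the class name are illustrative assumptions, not the specification's implementation.

```python
class StatefulNat:
    """Sketch of the stateful NAT of processes 1400/1500: the SNAT (1430)
    creates a connection record that the reverse DNAT (1525) later consults
    to recover the tenant and the original source addresses."""

    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.next_port = 5000          # toy allocation policy: sequential ports
        # (public_ip, public_port) -> (tenant_id, orig_src_ip, orig_src_port)
        self.records = {}

    def snat(self, tenant_id, src_ip, src_port):
        """Operation 1430: allocate an external address/port and record it."""
        key = (self.public_ip, self.next_port)
        self.records[key] = (tenant_id, src_ip, src_port)
        self.next_port += 1
        return key

    def reverse_dnat(self, dest_ip, dest_port):
        """Operation 1525: map the response's destination back to the tenant."""
        return self.records[(dest_ip, dest_port)]
```

The record is keyed by the translated (external) address pair, which is exactly what a SaaS machine's response message carries as its destination.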
At 1530, the process adds a tenant tunnel header to the incoming data message and incorporates the IP address of the incoming router and the IP address of the outgoing CFE as the source and destination IP addresses in this tunnel header. Then, at 1535, the process increments the VN transmission counter for the tenant to account for the transmission of this data message. [00185] At 1540, the process performs a route lookup (for example, an LPM lookup) in the identified VNP routing context (for example, in the VNP portion of the routing data, such as the router's VRF namespace) to identify its local interface (for example, its physical or virtual port), to which the incoming router provides the encapsulated message. The process adds (at 1540) a VN hop header to the received data message and incorporates the incoming router's IP address and the next hop CFE's IP address as the source and destination IP addresses of this VN hop header. After 1555, the process ends. [00186] As mentioned above, the MFNs in some embodiments include NAT engines 215 that perform NAT operations on the data message ingress and/or egress paths into and out of the virtual network. NAT operations are commonly performed today in many contexts and by many devices (for example, routers, firewalls, etc.). For example, a NAT operation is usually performed when traffic leaves a private network, in order to isolate the internal IP address space from the regulated public IP address space used on the Internet. A NAT operation typically maps one IP address to another IP address. [00187] With the proliferation of computers connected to the Internet, the challenge is that the number of computers exceeds the available number of IP addresses.
Unfortunately, the 32-bit IPv4 address space offers only about four billion unique addresses, which is not enough for every connected device to have its own public address. [00188] While the Internet communication port of a private network obtains a registered public address on the Internet, each device within a private network that connects to that communication port receives an unregistered private address. The private addresses of internal private networks can be in any range of IP addresses. However, the Internet Engineering Task Force (IETF) has suggested several private address ranges for use by private networks. These ranges are generally not available on the public Internet, so that routers can easily distinguish between public and private addresses. These private address ranges are known as RFC 1918 and are: (1) Class A 10.0.0.0 - 10.255.255.255, (2) Class B 172.16.0.0 - 172.31.255.255, and (3) Class C 192.168.0.0 - 192.168.255.255. [00189] It is important to perform source IP translation on the data message flows that leave private networks, so that external devices can differentiate different devices on different private networks that use the same internal IP addresses. When an external device needs to send a reply message to a device inside a private network, it must send its reply to a unique, Internet-routable public address. It cannot use the original IP address of the internal device, which may be used by multiple devices on multiple private networks. The external device therefore sends its response to the public IP address with which the original NAT operation replaced the internal device's private source IP address. After receiving this response message, the private network (for example, its network communication port) performs another NAT operation to replace the public destination IP address in the response with the IP address of the internal device. [00190] Many devices within a private network, and many applications running on those devices, need to share one public IP address or a finite number of public IP addresses associated with the private network.
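The three RFC 1918 ranges listed above can be checked with Python's standard `ipaddress` module; this snippet is an illustration and not part of the specification.

```python
import ipaddress

# The three RFC 1918 private ranges named above.
RFC1918_NETS = [ipaddress.ip_network(n) for n in
                ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_private_rfc1918(addr):
    """True when addr falls within the class A, B, or C RFC 1918 ranges."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in RFC1918_NETS)
```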
Therefore, NAT operations also typically translate layer 4 port addresses (for example, UDP and TCP port addresses), so that each flow leaving the private network can be given a distinct port of a shared public IP address. [00191] As mentioned above, the virtual network provider of some embodiments provides a virtual network as a service to different tenants over several public clouds. These tenants may use common IP addresses in their private networks, and they share a common set of network resources (for example, public IP addresses) of the virtual network provider. In some embodiments, the data traffic of the different tenants is carried between the CFEs of the overlay network through tunnels, and the tunnel headers mark each message with a unique tenant ID. These tenant identifiers allow the messages to be sent back to the source devices even when the private IP spaces of the tenants overlap. For example, the tenant identifiers make it possible to distinguish a message sent from a branch of tenant 17 from a message with the same private source address sent from a branch of another tenant. [00192] Standard NATs implemented in accordance with RFC 1631 do not support the notion of tenancy and, consequently, cannot distinguish between two messages with the same private IP addresses. However, in many virtual network deployments of some embodiments, it is beneficial to use standard NAT engines, as many mature, high performance, open source implementations exist today. In fact, many Linux kernels today have NAT engines that function as standard features. [00193] To use standard NAT engines for the different tenants of the tenant virtual networks, the virtual network provider of some embodiments uses tenant mapping (TM) engines before using the standard NAT engines. Figure 16 illustrates these TM engines 1605, which are placed at each virtual network communication port 1602 that is on the virtual network's egress path to the Internet. As shown, each TM engine 1605 is placed before a NAT engine 1610 on the message egress paths to the SaaS provider data centers 1620 through the Internet 1625.
In some embodiments, each MFN NAT engine 215 includes a TM engine (like the TM engine 1605) and a standard NAT engine (like the NAT engine 1610). [00194] In the example illustrated in Figure 16, the message flows come from two branches 1655 and 1660 and a data center 1665 of two tenants of the virtual network, and they enter the virtual network 1600 through the same communication port 1670, although this need not necessarily be the case. The virtual network 1600 in some embodiments is defined over several public cloud data centers of several public cloud providers. In some embodiments, the virtual network communication ports are part of the managed forwarding nodes, and the TM engines are placed before the NAT engines 1610 in the outgoing MFNs. [00195] When a data message reaches an outgoing communication port 1602 to exit the virtual network on its way to a data center 1620 of the SaaS provider, each TM engine 1605 maps the source network address (for example, the source IP and/or port addresses) of these data messages to a new source network address (for example, new source IP and/or port addresses), and the NAT engine 1610 maps the new source network address to yet another source network address (for example, yet another source IP and/or port addresses). In some embodiments, the TM engine is a stateless element and performs the mapping for each message through a static table, without examining any dynamic data structures. As a stateless element, the TM engine does not create a connection record when it processes the first data message of a data message flow in order to use that connection record when performing its address mapping for subsequent messages of the data message flow. [00196] The NAT engine 1610, on the other hand, in some embodiments is a stateful element that performs its mapping by reference to a connection store that stores connection records reflecting its prior SNAT mappings.
When the NAT engine receives a data message, in some embodiments this engine first checks its connection store to determine whether it previously created a connection record for the received message's flow. If so, the NAT engine uses the mapping contained in this record to perform its SNAT operation. Otherwise, it performs the SNAT operation based on a set of criteria that it uses to derive a new address mapping for the new data message flow. To do this, the NAT engine in some embodiments uses common network address translation techniques. [00197] In some embodiments, the NAT engine may also use the connection store when it receives a response data message from the SaaS provider machine, in order to perform a DNAT operation to forward the response data message to the tenant machine that sent the original message. In some embodiments, the connection record for each processed data message flow has a record identifier that includes the flow's identifier (for example, a five-tuple identifier with the translated source network address). [00198] In performing their mapping, the TM engines ensure that the data message flows of different tenants that use the same source IP and port addresses are mapped to unique, non-overlapping address spaces. For each message, the TM engine identifies the tenant ID and performs its address mapping based on this identifier. In some embodiments, the TM engine maps the source IP addresses of different tenants to different IP ranges, so that no two messages from different tenants are mapped to the same IP address. [00199] Consequently, each tenant network with a different tenant ID will be mapped to a unique address range within the complete 2^32 IPv4 address space (0.0.0.0 to 255.255.255.255). Class A and class B networks have 65,536 and 256 times, respectively, more possible IP addresses than a class C network.
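The stateless tenant-mapping behavior described above can be sketched as follows. The static table, the first-octet substitution, and the specific tenant-to-range assignments are illustrative assumptions; the point is only that the mapping is deterministic, keeps no per-flow state, and places tenants in disjoint ranges.

```python
# Static, per-tenant table: tenant ID -> first octet of its disjoint target
# range. The assignments (tenant 17 -> 15.x.x.x, tenant 18 -> 16.x.x.x) are
# made-up values for illustration only.
TENANT_BASE_OCTET = {17: 15, 18: 16}

def tm_map(tenant_id, src_ip, src_port):
    """Stateless TM mapping: same inputs always give the same output, and two
    tenants with identical private source addresses land in different ranges."""
    octets = src_ip.split(".")
    octets[0] = str(TENANT_BASE_OCTET[tenant_id])   # move into the tenant's range
    return ".".join(octets), src_port
```

Because the function consults only a static table, it needs no connection store, which is what distinguishes the TM engine from the stateful NAT engine behind it.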
Considering the size ratio of class A, B and C networks, the 256 class A networks can be allocated in the following way: (1) 240 of them to map 240 tenants with class A networks, (2) 15 of them to map 240 tenants with class B networks, and (3) a single class A network to map 240 tenants with class C networks. More specifically, in some embodiments, the lower class A networks (starting with 0.x.x.x and 1.x.x.x, and continuing up to 239.x.x.x) are used to map addresses coming from the 10.x.x.x class A network to 240 different target class A networks. The next 15 class A networks, 240.x.x.x to 254.x.x.x, are each used to hold 16 class B networks (for a total of 240 networks, i.e., 15 * 16). The last class A network, 255.x.x.x, is used to hold up to 256 private class C networks. Even though 256 tenants could fit there, only 240 are used and 16 class C networks remain unused. To summarize, some embodiments use the following mapping:
- 10.x.x.x networks > 0.x.x.x - 239.x.x.x networks, resulting in 240 different mappings, one per tenant;
- 172.16-31.x.x networks > 240.x.x.x - 254.x.x.x networks, resulting in 240 different mappings, one per tenant;
- 192.168.x.x networks > 255.x.x.x networks, resulting in up to 240 different mappings, one per tenant.
[00200] The schemes described above can support up to 240 tenants, provided it is not known in advance what class of network the tenants will use. In some embodiments, the public cloud network itself uses private IP addresses, in which case it is desirable not to map into the private address space again. As some embodiments remove one class A network and one class B network, only 239 different tenants can be supported in these embodiments. To achieve a unique mapping, some embodiments number all tenant IDs from 1 to 239 and add, modulo 240, the least significant 8 bits of the unmasked part of the private domain to the tenant ID (expressed in 8 bits).
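One plausible reading of this modulo-240 scheme, for a class A (10.x.x.x) source address, is sketched below. Since the text does not pin down exactly which octet supplies the "least significant 8 bits of the unmasked part", the choice made here is an assumption for illustration.

```python
def map_class_a_octet(tenant_id, low_octet):
    """Illustrative reading of the modulo-240 scheme: the new leading octet of
    a tenant's class A (10.x.x.x) address is the tenant ID (1-239) plus the
    low 8 bits of the unmasked part, modulo 240, so the result always lands
    within the 0-239 range of target class A networks."""
    return (tenant_id + (low_octet & 0xFF)) % 240
```

Whatever the exact octet choice, the modular sum guarantees the result stays inside the 240 reserved target networks while varying with the tenant ID.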
In this case, for class A addresses, each tenant's network is thus mapped to a distinct target class A network determined by this modular sum. [00201] In the implementation illustrated in Figure 16, some embodiments provide each TM engine 1605 with all the possible tenant subnets and a way to route messages back to any specific IP address on each of these subnets. This information can change dynamically as tenants, branches and mobile devices are added or removed. Therefore, this information must be dynamically distributed to the TM engines at the outgoing communication ports of the virtual network. The amount of information that is distributed and regularly updated can be large, as the outgoing Internet communication ports of the virtual network provider can be used by a large number of tenants. In addition, the restriction of 240 (or 239) tenant IDs is global and can be resolved only by adding multiple IP addresses at the exit points. [00202] Figure 17 illustrates a double NAT approach that is used in some embodiments instead of the single NAT approach illustrated in Figure 16. The approach illustrated in Figure 17 requires less tenant data to be distributed to most, if not all, of the TM engines, and it allows more private tenant networks to be mapped to the internal network of the virtual network provider. For a data message flow that traverses from a tenant machine through the virtual network 1700 and then through the Internet 1625 to another machine (for example, to a machine in a data center 1620 of a SaaS provider), the approach illustrated in Figure 17 places a NAT engine at the communication port 1770 through which the data flow enters the virtual network and at the communication port 1702 or 1704 through which this flow exits the virtual network toward the Internet 1625.
This approach also places the TM engines 1705 before the NAT engines 1712 of the incoming communication ports. [00203] In the example illustrated in Figure 17, the message flows come from two branches 1755 and 1760 and from a data center 1765 of two tenants of the virtual network, and they enter the virtual network 1700 through the same incoming communication port 1770, although this need not necessarily be the case. Like the virtual network 1600, the virtual network 1700 in some embodiments is defined over several public cloud data centers of several public cloud providers. In addition, in some embodiments, the virtual network communication ports 1702, 1704 and 1770 are part of the managed forwarding nodes, and in these embodiments the TM engines are placed before the NAT engines 215 of these MFNs. [00204] The TM engines 1605 and 1705 operate similarly in Figures 16 and 17. Like the TM engines 1605, the TM engine 1705 maps the source IP and port addresses of data messages entering the virtual network to new source IP and port addresses, when these data messages are destined to (that is, have destination IP addresses of) the SaaS provider data centers 1620. For each such data message, the TM engine 1705 identifies the tenant ID and performs its address mapping based on this identifier. [00205] Like the TM engines 1605, the TM engine 1705 in some embodiments is a stateless element and performs the mapping for each message through a static table, without examining any dynamic data structure. As a stateless element, the TM engine does not create a connection record when it processes the first data message of a data message flow in order to use that connection record when performing its address mapping for subsequent messages of the data message flow.
[00206] In performing their mapping, the TM engines 1705 at the incoming communication ports 1770 ensure that the data message flows of different tenants that use the same source IP and port addresses are mapped to unique, non-overlapping address spaces. In some embodiments, the TM engine maps the source IP addresses of different tenants to different IP ranges, so that no two messages from different tenants are mapped to the same IP address. In other embodiments, the TM engine 1705 can map the source IP addresses of two different tenants to the same source IP range but different source port ranges. In still other embodiments, the TM engine maps two tenants to different source IP ranges, while it maps two other tenants to the same source IP range but different source port ranges. [00207] Unlike the TM engines 1605, the TM engines 1705 at the incoming communication ports of the virtual network only need to identify the tenants of the branches, corporate data centers and corporate computing nodes connected to their incoming communication ports. This significantly reduces the tenant data that needs to be initially provided and periodically updated for each TM engine. [00208] The NAT engine 1712 of the incoming communication port 1770 in some embodiments can use external public IP addresses or internal IP addresses specific to the public cloud (for example, AWS, GCP or Azure) in which the incoming communication port 1770 resides. In either case, the NAT engine 1712 maps the source network address of a received message (that is, a message entering the virtual network 1700) to a unique IP address on the private network of the incoming communication port's public cloud. In some embodiments, the NAT engine 1712 translates the source IP address of each tenant's data message flows to a different unique IP address.
In other embodiments, however, the NAT engine 1712 translates the source IP addresses of different tenants' data message flows to the same IP address, but uses the source port addresses to differentiate the data message flows of the different tenants. In still other embodiments, the NAT engine maps the source IP addresses of two tenants to different source IP ranges, while it maps the source IP addresses of two other tenants to the same source IP range but different source port ranges. [00209] In some embodiments, the NAT engine 1712 is a stateful element that performs its mapping by reference to a connection store that stores connection records reflecting its prior SNAT mappings. In some embodiments, the NAT engine may also use the connection store when it receives a response data message from the SaaS provider machine, in order to perform a DNAT operation to forward the response data message to the tenant machine that sent the original message. The TM and NAT engines 1705, 1710 and 1712 are configured in some embodiments by the controller cluster 160 (for example, with tables provided to describe the mapping to be used for the different tenants and the different ranges of network address space). [00210] Figure 18 presents an example that illustrates the source port translation of the incoming NAT engine 1712. Specifically, it shows the source address mapping that the tenant mapping engine 1705 and the incoming NAT engine 1712 perform on a data message 1800 when it enters the virtual network 1700 through an incoming communication port 1770 and when it exits the virtual network at an outgoing communication port 1702. As shown, a tenant communication port 1810 sends the data message 1800, which arrives at the IPsec communication port 1805 with a source IP address 10.1.1.13 and a source port address 4432.
In some embodiments, these source addresses are addresses used by a tenant machine (not shown), while in other embodiments one or both of these source addresses are addresses produced by a source NAT operation performed by the tenant communication port or by another network element in the tenant's data center. [00211] After this message has been processed by the IPsec communication port 1805, this communication port or another incoming MFN module associates the message with tenant ID 15, which identifies the virtual network tenant to which the message 1800 belongs. Based on this tenant ID, the tenant mapping mechanism 1705 maps the source IP and port addresses to the source IP and port address pair 15.1.1.13 and 253, as shown. These source IP and port addresses uniquely identify the message flow of the data message 1800. [00212] The incoming NAT mechanism 1712 then converts (1) the source IP address of the data message 1800 into a private or public (internal or external) IP address 198.15.4.33 and (2) the source port address of that message into the port address 714. [00213] In some embodiments, the source port address assigned by the SNAT operation of the incoming NAT mechanism is also the source port address that is used to differentiate the different message flows outside the virtual network 1700. This is the case in the example illustrated in Figure 18. As shown, the outbound NAT mechanism 1710 in this example does not change the source port address of the data message when it performs its SNAT operation. Instead, it only changes the source IP address to an external address 198.15.7.125, which in some embodiments is the public IP address of the outgoing communication port (or ports) of the virtual network. This public IP address in some embodiments is also an IP address of the public cloud data center in which the incoming and outgoing communication ports 1770 and 1702 operate.
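The two-stage source-address translation of Figure 18 can be sketched as follows. The mapping tables and function names are illustrative assumptions; only the addresses taken from the figure's example (tenant ID 15, 10.1.1.13:4432 mapped to 15.1.1.13:253, the ingress NAT address 198.15.4.33 and port 714) come from the text.

```python
# Sketch of the Figure 18 translation pipeline. Mapping tables are hypothetical.

# Tenant mapping (TM): per-tenant IP prefix and per-flow port map that place
# each tenant's flows in a non-overlapping, tenant-unique address space.
TM_PREFIX = {15: "15.1.1."}                     # hypothetical per-tenant prefix
TM_PORTS = {(15, "10.1.1.13", 4432): 253}       # hypothetical per-flow port map

def tenant_map(tenant_id, src_ip, src_port):
    """Map a tenant flow into the tenant's unique address space."""
    mapped_ip = TM_PREFIX[tenant_id] + src_ip.rsplit(".", 1)[1]
    return mapped_ip, TM_PORTS[(tenant_id, src_ip, src_port)]

# Stateful ingress SNAT: records each mapping so the reverse DNAT of
# Figure 19 can restore it for response traffic from the SaaS provider.
SNAT_CONN_STORE = {}

def ingress_snat(mapped_ip, mapped_port, assigned_port):
    SNAT_CONN_STORE[("198.15.4.33", assigned_port)] = (mapped_ip, mapped_port)
    return "198.15.4.33", assigned_port

def reverse_dnat(dst_ip, dst_port):
    """DNAT a response message back to the tenant-mapped address."""
    return SNAT_CONN_STORE[(dst_ip, dst_port)]
```

Running the figure's example through this sketch, `tenant_map(15, "10.1.1.13", 4432)` yields `("15.1.1.13", 253)`, and the ingress SNAT then records the mapping that the reverse DNAT consults for the response message.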
[00214] With the source IP and port addresses 198.15.4.33 and 714, the data message 1800 is routed through the virtual network to the outgoing communication port 1702, which forwards it to the SaaS communication port 1815. [00215] Figure 19 illustrates the processing of a response message 1900 that a SaaS machine (not shown) sends in response to its processing of the data message 1800. In some embodiments, the response message 1900 can be identical to the original data message 1800, can be a modified version of the original data message 1800, or can be a completely new data message. As shown, the SaaS communication port 1815 sends the message 1900 based on the destination IP and port addresses 198.15.7.125 and 714, which were the source IP and port addresses of the data message 1800 when that message arrived at the SaaS communication port 1815. [00216] The message 1900 is received at a communication port (not shown) of the virtual network, and this communication port provides the data message to the NAT mechanism 1710 that performed the last SNAT operation on the message 1800 before that message was sent to the SaaS provider. Although in the example illustrated in Figure 19 the data message 1900 is received at the same NAT mechanism 1710 that performed the last SNAT operation, this need not be the case in every deployment. [00217] The NAT mechanism 1710 (now acting as an incoming NAT mechanism) performs a DNAT (destination NAT) operation on the data message 1900. This operation changes the external destination IP address 198.15.7.125 to the destination IP address 198.15.4.33, which the virtual network uses to route the data message 1900 through the public cloud routing fabric and between the virtual network components. Again, the IP address 198.15.4.33 can be a public or a private IP address in some embodiments. [00218] As shown, the NAT mechanism 1712 (now acting as an outbound NAT mechanism) receives the message 1900 after the NAT mechanism 1710 translates its destination IP address.
The NAT mechanism 1712 then performs a second DNAT operation on this message 1900, which replaces the destination IP and port addresses with 15.1.1.13 and 253. The tenant mapping mechanism 1705 then restores the original destination IP and port addresses 10.1.1.13 and 4432, and the message is returned through the IPsec communication port 1805 toward the tenant communication port 1810. [00219] In some embodiments, a virtual network provider uses the processes, systems and components described above to provide multiple virtual WANs for multiple different tenants (for example, multiple different corporate WANs for multiple companies) over multiple public clouds of the same or of different public cloud providers. Figure 20 presents an example showing M virtual corporate WANs 2015 for M tenants of a virtual network provider that has network infrastructure (or infrastructures) and clusters of controllers 2010 in N public clouds 2005 of one or more public cloud providers. [00220] Each tenant's virtual WAN 2015 can span all N public clouds 2005 or a subset of these public clouds. Each tenant's virtual WAN 2015 connects one or more branches 2020, data centers 2025, SaaS provider data centers 2030 and remote devices of the tenant. In some embodiments, each tenant's virtual WAN spans any public cloud 2005 that the VNP controller cluster deems necessary to efficiently route data messages between the tenant's different computing nodes 2020 to 2035. In selecting the public clouds, in some embodiments, the controller cluster also considers the public clouds that the tenant selects and/or the public clouds in which the tenant, or at least one SaaS provider of the tenant, has one or more machines. [00221] Each tenant's virtual WAN 2015 allows remote devices 2035 (for example, mobile devices or remote computers) to avoid interacting with the tenant's WAN communication port at any tenant branch or data center in order to access a SaaS provider service (that is, to access a machine or cluster of machines of the SaaS provider).
The tenant's virtual WAN in some embodiments allows the remote devices to bypass the WAN communication ports at the branches and data centers by moving the functionality of these WAN communication ports (for example, the WAN security communication ports) to one or more machines in the public clouds spanned by the virtual WAN. [00222] For example, to allow a remote device to access the computing resources of the tenant or its SaaS provider services, in some embodiments a WAN communication port must apply firewall rules that control how the remote device can access the tenant's computing resources or its SaaS provider services. To bypass the WAN communication ports at the tenant's branches and data centers, the tenant's firewall mechanisms 210 are placed in the MFNs of the virtual network in one or more public clouds spanned by the tenant's virtual WAN. [00223] The firewall mechanisms 210 in these MFNs perform the firewall service operations on the data message flows to and from the remote devices. By performing these operations in the virtual network deployed in one or more public clouds, the data message traffic associated with the tenant's remote devices does not need to be routed unnecessarily through the tenant's data centers or branches in order to receive firewall rule processing. This relieves traffic congestion at the tenant's data centers and branches, and avoids consuming expensive inbound/outbound network bandwidth at these locations to process traffic that is not destined for computing resources at these locations. It also helps to speed up the forwarding of data message traffic to and from the remote devices, as this approach allows the intermediate firewall rule processing to occur within the virtual network as the data message flows traverse it toward their destinations (for example, at the inbound MFNs, outbound MFNs or intermediate-hop MFNs).
[00224] In some embodiments, the firewall enforcement mechanism 210 (for example, a firewall service VM) of an MFN receives firewall rules from the central VNP controllers 160. A firewall rule in some embodiments includes a rule identifier and an action. The rule identifier in some embodiments includes one or more match values that are compared to attributes of the data message, such as layer 2 attributes (for example, MAC addresses), layer 3 attributes (for example, five-tuple identifiers, etc.), tenant ID, location ID (for example, office location ID, data center ID, remote user ID, etc.), to determine whether the firewall rule matches a data message. [00225] The action of the firewall rule in some embodiments specifies the action (for example, allow, drop, redirect, etc.) that the firewall enforcement mechanism 210 must take on a data message when the firewall rule matches the attributes of the data message. To address the possibility that multiple firewall rules match a data message, the firewall enforcement mechanism 210 stores the firewall rules (which it receives from the controller cluster 160) in a firewall rules data store in a hierarchical manner, so that one firewall rule can have a higher priority than another firewall rule. When a data message matches two firewall rules, the firewall enforcement mechanism applies the rule with the higher priority in some embodiments. In other embodiments, the firewall enforcement mechanism examines the firewall rules according to their hierarchy (that is, it examines higher-priority rules before lower-priority rules) to ensure that it first matches the higher-priority rule when another, lower-priority rule could also match the data message.
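The hierarchical rule matching described above can be sketched as follows. The rule attributes, the priority encoding (lower number meaning higher priority) and the default action are assumptions for illustration, not the patent's actual data layout:

```python
from dataclasses import dataclass

@dataclass
class FirewallRule:
    priority: int   # lower number = higher priority (an assumed encoding)
    match: dict     # attribute -> required value (e.g. tenant_id, five-tuple fields)
    action: str     # "allow", "drop", "redirect", ...

def lookup(rules, msg_attrs):
    """Examine rules from highest to lowest priority; the first match wins."""
    for rule in sorted(rules, key=lambda r: r.priority):
        if all(msg_attrs.get(k) == v for k, v in rule.match.items()):
            return rule.action
    return "allow"  # assumed default action when no rule matches

# Hypothetical rule set: drop SSH for tenant 15, allow the rest of its traffic.
rules = [
    FirewallRule(10, {"tenant_id": 15, "dst_port": 22}, "drop"),
    FirewallRule(50, {"tenant_id": 15}, "allow"),
]
```

With this rule set, a tenant-15 message to destination port 22 matches both rules, and the higher-priority rule prevails, so `lookup` returns `"drop"`.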
[00226] Some embodiments allow the controller cluster to configure the MFN components so that the firewall service mechanisms examine a data message at an incoming node (for example, node 850) as it enters the virtual network, at an intermediate node (for example, node 857) within the virtual network, or at an outgoing node (for example, node 855) as it leaves the virtual network. At each of these nodes, the CFE (for example, 832, 856 or 858) in some embodiments calls its associated firewall service mechanism 210 to perform the firewall service operation on the data message that the CFE receives. In some embodiments, the firewall service mechanism returns its decision to the module that called it (for example, to the CFE), so that this module can perform the firewall action on the data message, while in other embodiments the firewall service mechanism performs its firewall action on the data message itself. [00227] In some embodiments, other MFN components direct the firewall service mechanism to perform its operation. For example, at an incoming node, the VPN communication port (for example, 225 or 230) in some embodiments directs its associated firewall service mechanism to perform its operation, in order to determine whether the data message should be passed to the CFE of the incoming node. Furthermore, at the outgoing node, the CFE in some embodiments passes the data message to the associated firewall service mechanism, which, if it decides to allow the data message through, either passes the data message over an external network (for example, the Internet) to its destination, or passes the data message to its associated NAT mechanism 215 to perform its NAT operation before passing the data message to its destination over the external network.
[00228] The virtual network provider of some embodiments allows the tenant's WAN security communication port, defined in the public clouds, to implement other security services in addition to, or instead of, firewall services. For example, a tenant's distributed WAN security communication port (which in some embodiments is distributed across each public cloud data center spanned by the tenant's virtual network) includes not only firewall service mechanisms but also intrusion detection mechanisms and intrusion prevention mechanisms. In some embodiments, the intrusion detection mechanisms and intrusion prevention mechanisms are architecturally incorporated into the MFN 150 to occupy a position similar to that of the firewall service mechanism 210. [00229] Each of these mechanisms in some embodiments includes one or more stores that store the intrusion detection/prevention policies distributed by the central controller cluster 160. [00230] As mentioned above, the virtual network provider deploys each tenant's virtual WAN by deploying at least one MFN in each public cloud spanned by the virtual WAN, and by configuring the deployed MFNs to define routes between the MFNs that allow the tenant's message flows to enter and exit the virtual WAN. In addition, as mentioned above, each MFN can be shared by different tenants in some embodiments, while in other embodiments each MFN is deployed for only one particular tenant. [00231] In some embodiments, each tenant's virtual WAN is a secure virtual WAN that is established by connecting the MFNs used by that WAN through overlay tunnels. This overlay encapsulation approach in some embodiments encapsulates each tenant's data message flows with a tenant-unique encapsulation header, for example one that contains a tenant identifier that uniquely identifies the tenant.
For a tenant, the CFEs of the virtual network provider in some embodiments use one tunnel header to identify the incoming/outgoing routing elements for entering/leaving the tenant's virtual WAN, and another tunnel header to traverse the intermediate routing elements of the virtual network. The CFEs of the virtual WAN use different overlay encapsulation mechanisms in other embodiments. [00232] To deploy a virtual WAN for a tenant over one or more public clouds, the VNP controller cluster (1) identifies possible edge MFNs (which can serve as incoming or outgoing MFNs for different data message flows) for the tenant based on the locations of the tenant's corporate computing nodes (for example, branches, data centers, mobile users and SaaS providers), and (2) identifies routes between all the possible edge MFNs. Once these routes are identified, they are propagated to the routing tables of the CFEs (for example, propagated using OpenFlow to different OVS-based virtual network routers). Specifically, to identify the optimal routes through a tenant's virtual WAN, the MFNs associated with this WAN generate measurement values that quantify the quality of the network connections between them and their neighboring MFNs, and regularly provide their measurements to the VNP controller cluster.
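The tenant-unique encapsulation described above can be sketched as below. The 4-byte tenant-identifier header layout is hypothetical, since the text does not specify the actual header format:

```python
import struct

# Hypothetical tenant tunnel header: a 4-byte network-order tenant ID prepended
# to the inner message. This only illustrates encapsulating each tenant's flows
# with a tenant-unique identifier; real overlay headers carry more fields.
HDR = struct.Struct("!I")

def encapsulate(tenant_id: int, inner: bytes) -> bytes:
    """Wrap an inner data message with the tenant-unique header."""
    return HDR.pack(tenant_id) + inner

def decapsulate(frame: bytes):
    """Recover the tenant ID and the inner data message."""
    (tenant_id,) = HDR.unpack_from(frame)
    return tenant_id, frame[HDR.size:]
```

A CFE receiving such a frame could recover the tenant ID with `decapsulate` and use it to select the tenant's routing context before forwarding.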
[00234] [00234] When defining routes over a tenant's virtual WAN, the VNP controller cluster optimizes routes to achieve the desired end-to-end performance, reliability and security, while trying to minimize the routing of the tenant's message flows over the Internet . The controller cluster also configures MFN components to optimize layer 4 processing of data message flows that pass through the network (for example, to optimize the end-to-end rate of TCP connections by dividing the rate control mechanisms in the connection path). [00235] [00235] With the proliferation of public clouds, it is generally very easy to find a large public cloud data center close to each branch of a company. Similarly, service providers [00236] [00236] Enterprise WANs require bandwidth guarantees to provide critical business applications with acceptable performance at all times. Such applications, perhaps interactive data applications, for example, ERP, financial or purchasing, time-oriented application (for example, industrial control or loT), real-time application (for example, VolP or video conferencing) Consequently, the traditional infrastructure WAN (for example, Frame Relay or MPLS) provides these guarantees. [00237] [00237] A major obstacle in providing guaranteed bandwidth on a multi-tenant network is the need to reserve bandwidth through one or more paths for a given customer. In some modalities, VNP offers QoS services and provides a compromised entry rate (ICR) guarantee and a compromised outgoing rate (ECR) guarantee. ICR refers to the rate of traffic entering the virtual network, while ECR refers to the rate of traffic leaving the virtual network for the lessee's website. [00238] [00238] As long as the traffic does not exceed the ICR and ECR limits, the virtual network in some modalities provides guarantees of bandwidth and delay. For example, as long as incoming or outgoing HTTP traffic does not exceed 1 Mbps, bandwidth and low delay are guaranteed. 
This is the point-to-cloud model because, for QoS purposes, the VNP does not need to track the traffic destinations, as long as the traffic remains within the ICR/ECR limits. This model is sometimes called the hose model. [00239] For more demanding applications, in which a customer wants a point-to-point guarantee, a virtual data pipe needs to be built to carry the highly critical traffic. For example, a company may want two hub sites or data centers connected with high service-level guarantees. To that end, the VNP routing automatically chooses a routing path that meets the bandwidth constraint for each customer. This is called the point-to-point model or the pipe model. [00240] The main advantage of the VNP in providing guaranteed bandwidth to end users is the ability to adjust the VNP infrastructure according to changing bandwidth demands. Most public clouds provide minimal bandwidth guarantees between every two instances located in different regions of the same cloud. If the current network does not have enough spare capacity to provide the guaranteed bandwidth for a new request, the VNP adds new resources to its infrastructure. For example, the VNP can add new CFEs in high-demand regions. [00241] One challenge is to optimize the performance and the cost of this new dimension of route planning and of scaling the infrastructure up and down. To simplify the bandwidth accounting and the algorithms, some embodiments assume that end-to-end bandwidth reservations are not split. In other words, if a certain bandwidth (for example, 10 Mbps) is reserved between branch A and branch B of a given tenant, this bandwidth is allocated over a single path that begins at an incoming CFE to which branch A connects, and then traverses a set of zero or more intermediate CFEs to reach the outgoing CFE connected to branch B. Some embodiments also assume that the bandwidth-guaranteed path crosses only a single public cloud.
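One plausible way to meter traffic against a committed rate such as the ICR or ECR is a token bucket. The text does not prescribe a metering algorithm, so this sketch, including the class name and the burst parameter, is an assumption:

```python
import time

class CommittedRateMeter:
    """Token-bucket sketch of an ICR/ECR check: traffic that stays within the
    committed rate keeps its bandwidth/delay guarantee; excess traffic does not."""

    def __init__(self, rate_bps: float, burst_bits: float):
        self.rate, self.burst = rate_bps, burst_bits
        self.tokens, self.last = burst_bits, time.monotonic()

    def conforms(self, packet_bits: int) -> bool:
        # Refill tokens at the committed rate, capped at the burst size.
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bits <= self.tokens:
            self.tokens -= packet_bits
            return True
        return False
```

For a hypothetical 1 Mbps commitment with a 10,000-bit burst, two back-to-back 8,000-bit packets exhaust the bucket, so the first conforms and the second does not.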
[00242] To account for the multiple bandwidth reservations that intersect in the network topology, the VNP in some embodiments statically defines the routing over a reserved-bandwidth path, so that the data message flows always cross the same routes that were reserved for the bandwidth requirements. In some embodiments, each route is identified with a unique label, and each CFE crossed by the route maps this label to a single output interface associated with the route. Specifically, each CFE maps each data message that carries this label in its header and that arrives on a specific input interface to a single output interface. [00243] In some embodiments, the controller cluster maintains a network graph formed by several interconnected nodes. Each node n in the graph has an associated total allocatable guaranteed bandwidth (TBWn) and the amount of bandwidth already reserved (allocated to a given reserved path) at this node (RBWn). In addition, for each node, the graph includes the cost in cents per gigabyte (Cij) and the delay in milliseconds (Dij) associated with sending traffic between that node and every other node in the graph. The weight associated with sending traffic between node i and node j is Wij = a*Cij + Dij, where a is a system parameter that is normally between 1 and 10. [00244] When a bandwidth reservation request of value BW between branches A and B is accepted, the controller cluster first maps the request to specific ingress and egress routers n and m, which are connected to branches A and B, respectively. The controller cluster then performs a routing process that conducts two lowest-cost (for example, shortest path) computations between n and m. The first is the lowest-cost route (for example, shortest path) between n and m, regardless of the available bandwidth along the computed route. The total weight of this route is computed as W1.
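The routing process above, together with the trimmed-graph computation that the description continues with, can be sketched using a standard shortest-path search. The graph encoding, function names and default K value are assumptions; the weights are the precomputed Wij = a*Cij + Dij values:

```python
import heapq

def shortest_path_weight(graph, src, dst, allowed):
    """Dijkstra over the nodes in `allowed`; graph[i] = {j: Wij}, where
    Wij = a*Cij + Dij is precomputed. Returns the total weight, or None."""
    if src not in allowed or dst not in allowed:
        return None
    dist, heap = {src: 0.0}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            if v in allowed and d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return None

def pick_route(graph, tbw, rbw, src, dst, bw, k=0.2):
    """Two lowest-cost computations: the full graph, then a graph trimmed of
    nodes whose spare capacity TBWi - RBWi cannot carry the requested BW."""
    w1 = shortest_path_weight(graph, src, dst, set(graph))
    trimmed = {n for n in graph if bw <= tbw[n] - rbw[n]}
    w2 = shortest_path_weight(graph, src, dst, trimmed)
    if w2 is not None and w2 <= w1 * (1 + k):
        return "trimmed", w2          # second route accepted (within K%)
    return "needs-more-capacity", w1  # controller would add back nodes and retry
```

On a small hypothetical triangle topology, a 3 Mbps request fits through the cheap intermediate node, while a 10 Mbps request prunes it and forces the fallback path, triggering the add-nodes-and-retry branch.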
[00245] The second lowest-cost (for example, shortest path) computation first modifies the graph by eliminating all nodes i for which BW > TBWi − RBWi. The modified graph is called the trimmed graph. The controller cluster then performs a second lowest-cost route (for example, shortest path) computation over the trimmed graph. If the weight of the second route is no more than K% (K is normally 10% to 30%) greater than that of the first, the second route is selected as the preferred route. Otherwise, when this requirement is not met, the controller cluster adds the node i with the lowest value of TBWi − RBWi and repeats the two lowest-cost (for example, shortest path) computations. The controller cluster continues to add more routers until the condition is met. At that point, the reserved bandwidth BW is added to all RBWi for which i is a router on the selected route. [00246] For the special case of a request for additional bandwidth on a route that already has reserved bandwidth, the controller cluster first deletes the current bandwidth reservation between nodes A and B and computes the path for the total bandwidth request between these nodes. To do this, the information maintained for each node in some embodiments also includes the bandwidth reserved for each label, or for each source and destination branch, and not just the overall reserved bandwidth. After the bandwidth reservations are added to the network, some embodiments do not revisit the routes as long as there are no major changes in the measured network delays or costs over the virtual network. However, when the measurements and/or costs change, these embodiments repeat the bandwidth reservation and route computation processes. [00247] Figure 21 conceptually illustrates a process 2100 performed by the controller cluster 160 of the virtual network provider to deploy and manage a virtual WAN for a particular tenant.
In some embodiments, the process 2100 is performed by several different controller programs running on the controller cluster 160. The operations of this process do not necessarily have to follow the sequence shown in Figure 21, as these operations can be performed by the different programs in parallel or in a different sequence. Accordingly, these operations are illustrated in this figure only to describe one exemplary sequence of operations performed by the controller cluster. [00248] As shown, the controller cluster initially deploys (at 2105) several MFNs in several public cloud data centers of several different public cloud providers (for example, Amazon AWS, Google GCP, etc.). The controller cluster in some embodiments configures (at 2105) these deployed MFNs for one or more tenants other than the particular tenant for which the process 2100 is illustrated. [00249] At 2110, the controller cluster receives from the particular tenant data about the attributes of the external machines and locations of the particular tenant. In some embodiments, this data includes the private subnets used by the particular tenant, as well as identifiers of one or more tenant offices and data centers in which the particular tenant has external machines. In some embodiments, the controller cluster can receive this data from the tenant through APIs or through a user interface that the controller cluster provides. [00250] Next, at 2115, the controller cluster generates a routing graph for the particular tenant from the measurements collected by the measurement agents 205 of the MFNs 150 that are candidate MFNs for establishing the virtual network for the particular tenant. As mentioned above, the routing graph has nodes that represent the MFNs, and links between the nodes that represent the network connections between the MFNs. The links have associated weights, which are cost values that quantify the quality and/or the cost of using the network connections represented by the links.
As mentioned above, the controller cluster first generates a measurement graph from the collected measurements, and then generates the routing graph by removing from the measurement graph the links that are not ideal (for example, links that have large delays or drop rates). [00251] After building the routing graph, the controller cluster performs (at 2120) path searches to identify possible routes between different pairs of candidate ingress and egress nodes (that is, MFNs) that the tenant's external machines can use to send data messages into the virtual network (implemented by the MFNs) and to receive data messages from the virtual network. In some embodiments, the controller cluster uses known path search algorithms to identify different paths between each candidate ingress/egress node pair. Each path for such a pair uses one or more links that, when concatenated, traverse from the ingress node to the egress node through zero or more intermediate nodes. [00252] In some embodiments, the cost between two MFNs comprises a weighted sum of the estimated latency and financial costs of the connection link between the two MFNs. The latency and financial costs in some embodiments include one or more of the following: (1) link delay measurements, (2) estimated message processing latency, (3) cloud charges for outbound traffic from a given data center to another data center of the same public cloud provider, or for leaving the public cloud provider's cloud (for example, to another public cloud data center of another public cloud provider, or to the Internet), and (4) estimated message processing costs associated with the MFNs running on host computers in the public clouds. [00253] Some embodiments assess a penalty for connection links between two MFNs that cross the public Internet, in order to minimize such crossings whenever possible.
Some embodiments also encourage the use of private network connections between two data centers (for example, by reducing the cost of the connection link), in order to bias route generation toward the use of such connections. Using the computed costs of these pairwise links, the controller cluster can compute the cost of each routing path that uses one or more of these links by aggregating the costs of the individual links used by the routing path. [00254] The controller cluster selects (at 2120) one or up to N identified paths (where N is an integer greater than 1) based on the computed costs (for example, the lowest aggregate cost) of the identified candidate paths between each candidate ingress/egress node pair. In some embodiments, the computed cost of each path is based on the weight cost of each link used by the path (for example, it is a sum of the weight values associated with the links), as mentioned above. The controller cluster can select more than one path between a pair of ingress/egress nodes when more than one route is needed between two MFNs to allow the ingress MFN or an intermediate MFN to perform a multipath operation. [00255] After selecting (at 2120) one or N paths for each candidate ingress/egress node pair, the controller cluster defines one or N routes based on the selected paths, and then generates route tables or route table portions for the MFNs that implement the particular tenant's virtual network. The generated route records identify the edge MFNs for reaching the different subnets of the particular tenant, and identify the next-hop MFNs for traversing the routes from the ingress MFNs to the egress MFNs. [00256] At 2125, the controller cluster distributes the route records to the MFNs, in order to configure the routing elements 235 of these MFNs to implement the virtual network for the particular tenant.
In some embodiments, the controller cluster communicates with the routing elements to pass the route records using communication protocols that are currently used in software-defined multi-tenant data centers to configure software routers running on host computers to implement a logical network that spans the host computers. [00257] After the MFNs are configured and the virtual network is operational for the particular tenant, the edge MFNs receive data messages from the tenant's external machines (that is, machines outside the virtual network) and forward these data messages through the virtual network to other edge MFNs, which in turn forward the data messages to other external machines of the tenant. While performing these forwarding operations, the ingress, intermediate and egress MFNs collect statistics about their forwarding operations. In addition, in some embodiments, one or more modules in each MFN collect other statistics about network or compute consumption in the public cloud data centers. In some embodiments, the public cloud providers collect this consumption data and pass the collected data to the virtual network provider. [00258] As a billing cycle approaches, the controller cluster collects (for example, at 2130) the statistics collected by the MFNs and/or the network/compute consumption data collected by the MFNs or provided by the public cloud providers. Based on the collected statistics and/or the provided network/compute consumption data, the controller cluster generates (at 2130) billing reports and sends the billing reports to the particular tenant. [00259] As mentioned above, the amount charged in the billing report accounts for the network and compute consumption data and the statistics that the controller cluster receives (for example, at 2130).
In addition, in some embodiments, the bill accounts for the cost that the virtual network provider incurred in operating the MFNs (which implement the virtual network for the particular tenant), plus a rate of return (for example, a 10% markup). This billing scheme is convenient for the particular tenant, as the tenant does not have to deal with bills from the several different public cloud providers over which the tenant's virtual network is deployed. The cost incurred by the VNP in some embodiments includes the costs that the public cloud providers charge the VNP. At 2130, the controller cluster also charges a credit card, or electronically withdraws funds from a bank account, for the charges reflected in the billing report. [00260] At 2135, the controller cluster determines whether it has received new measurements from the measurement agents 205. If not, the process transitions to 2150, which is described below. On the other hand, when the controller cluster determines that it has received new measurements from the measurement agents, it determines (at 2140) whether it needs to re-examine its routing graph for the particular tenant based on the new measurements. Absent an MFN failure, the controller cluster in some embodiments updates its routing graph for each tenant at most once during a particular period of time (for example, once every 24 hours or every week) based on the received updated measurements. [00261] When the controller cluster determines (at 2140) that it needs to re-examine the routing graph based on the newly received measurements, the process generates (at 2145) a new measurement graph based on the newly received measurements. In some embodiments, the controller cluster uses a weighted sum to blend each new measurement with the previous measurements, in order to ensure that the measurement values associated with the links of the measurement graph do not fluctuate dramatically each time a new set of measurements is received.
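The weighted-sum blending just described can be sketched as an exponentially weighted moving average; the blending factor is an assumed parameter, not a value given in the text:

```python
def blend(prev: float, new: float, alpha: float = 0.25) -> float:
    """Mix a new measurement with the previous smoothed value so that link
    weights do not fluctuate dramatically (alpha is an assumed blend factor)."""
    return (1 - alpha) * prev + alpha * new

# A transient delay spike of 80 ms on a link that normally measures 40 ms
# only nudges the smoothed value, rather than doubling the link weight.
smoothed = 40.0
for sample in (40.0, 80.0, 40.0):
    smoothed = blend(smoothed, sample)
```

After the three samples the smoothed delay is 47.5 ms, so the spike shifts the link weight gradually instead of abruptly.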
[00262] At 2145, the controller cluster also determines whether it needs to adjust the routing graph based on the adjusted measurement graph (for example, whether it needs to adjust the weight values of the routing-graph links, or to add or remove links in the routing graph because of the adjusted measurement values associated with the links). If so, the controller cluster (at 2145) adjusts the routing graph, performs path search operations (like the operations at 2120) to identify routes between ingress/egress node pairs, generates route records based on the identified routes, and distributes the route records to the MFNs. From 2145, the process transitions to 2150. [00263] The process also transitions to 2150 when the controller cluster determines (at 2140) that it does not need to re-examine the routing graph. At 2150, the controller cluster determines whether another billing cycle is approaching for which it must collect statistics on the processed data messages and the consumed network/compute resources. If not, the process returns to 2135 to determine whether it has received new measurements from the MFN measurement agents. Otherwise, the process returns to 2130 to collect statistics and network/compute consumption data, and to generate and send billing reports. In some embodiments, the controller cluster repeatedly performs the operations of the process 2100 until the particular tenant no longer needs a virtual network deployed in the public cloud data centers. [00264] In some embodiments, the controller cluster not only deploys virtual networks for tenants in the public cloud data centers, but also assists the tenants in deploying and configuring compute node machines and service machines in the public cloud data centers. The deployed service machines can be machines separate from the service machines of the MFNs.
In some embodiments, the controller cluster's billing report for the particular tenant also accounts for the compute resources consumed by the deployed compute and service machines. Again, having one invoice from one virtual network provider for the network and compute resources consumed in multiple public cloud data centers of multiple public cloud providers is preferable for the tenant to receiving multiple invoices from multiple public cloud providers. [00265] Many of the features and applications described above are implemented as software processes that are specified as a set of instructions recorded on computer-readable storage media (also referred to as computer-readable media). When these instructions are executed by one or more processing units (for example, one or more processors, processor cores, or other processing units), they cause the processing units to perform the actions indicated in the instructions. Examples of computer-readable media include, but are not limited to, CD-ROMs, flash drives, RAM chips, hard drives, EPROMs, etc. Computer-readable media do not include carrier waves and electronic signals passing wirelessly or over wired connections. [00266] In this specification, the term "software" is meant to include firmware residing in read-only memory or applications stored in magnetic storage, which can be read into memory for processing by a processor. Also, in some embodiments, multiple software inventions can be implemented as sub-parts of a larger program while remaining distinct software inventions. In some embodiments, multiple software inventions can also be implemented as separate programs. Finally, any combination of separate programs that together implement a software invention described here is within the scope of the invention.
In some embodiments, the software programs, when installed to operate on one or more electronic systems, define one or more specific machine implementations that execute and perform the operations of the software programs. [00267] Figure 22 conceptually illustrates a computer system 2200 with which some embodiments of the invention are implemented. The computer system 2200 can be used to implement any of the hosts, controllers, and managers described above. As such, it can be used to perform any of the processes described above. This computer system includes various types of non-transitory machine-readable media and interfaces for various other types of machine-readable media. The computer system 2200 includes a bus 2205, processing unit(s) 2210, a system memory 2225, a read-only memory 2230, a permanent storage device 2235, input devices 2240, and output devices 2245. [00268] The bus 2205 collectively represents all system, peripheral, and chipset buses that communicatively connect the various internal devices of the computer system 2200. [00269] From these various memory units, the processing unit(s) 2210 retrieve instructions to execute and data to process in order to execute the processes of the invention. The processing unit(s) can be a single processor or a multi-core processor in different embodiments. The read-only memory (ROM) 2230 stores static data and instructions that are needed by the processing unit(s) 2210 and other modules of the computer system. The permanent storage device 2235, on the other hand, is a read-and-write memory device. This device is a non-volatile memory unit that stores instructions and data even when the computer system 2200 is turned off. Some embodiments of the invention use a mass storage device (such as a magnetic or optical disk and its corresponding disk drive) as the permanent storage device 2235.
[00270] Other embodiments use a removable storage device (such as a floppy disk, flash drive, etc.) as the permanent storage device. Like the permanent storage device 2235, the system memory 2225 is a read-and-write memory device. However, unlike the storage device 2235, the system memory is a volatile read-and-write memory, such as random access memory. The system memory stores some of the instructions and data that the processor needs at runtime. In some embodiments, the processes of the invention are stored in the system memory 2225, the permanent storage device 2235, and/or the read-only memory 2230. From these various memory units, the processing unit(s) 2210 retrieve instructions to execute and data to process in order to execute the processes of some embodiments. [00271] The bus 2205 also connects to the input and output devices 2240 and 2245. The input devices enable the user to communicate information and select commands to the computer system. The input devices 2240 include alphanumeric keyboards and pointing devices (also called "cursor control devices"). The output devices 2245 display images generated by the computer system. The output devices include printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD). Some embodiments include devices, such as a touchscreen, that function as both input and output devices. [00272] Finally, as shown in Figure 22, the bus 2205 also couples the computer system 2200 to a network 2265 through a network adapter (not shown). In this manner, the computer can be a part of a network of computers (such as a local area network ("LAN"), a wide area network ("WAN"), an Intranet, or a network of networks, such as the Internet). Any or all of the components of the computer system 2200 may be used in conjunction with the invention.
[00273] Some embodiments include electronic components, such as microprocessors, storage, and memory, that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media). Some examples of such computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (for example, DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (for example, DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (for example, SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid-state hard drives, read-only and recordable Blu-Ray discs, ultra-density optical discs, any other optical or magnetic media, and floppy disks. The computer-readable media can store a computer program that is executable by at least one processing unit and includes sets of instructions for performing various operations. Examples of computer programs or computer code include machine code, such as that produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter. [00274] While the above discussion primarily refers to microprocessors or multi-core processors that execute software, some embodiments are performed by one or more integrated circuits, such as application-specific integrated circuits (ASICs) or field-programmable gate arrays (FPGAs). In some embodiments, such integrated circuits execute instructions that are stored on the circuit itself. [00275] As used in this specification, the terms "computer", "server", "processor", and "memory" refer to electronic or other technological devices. These terms exclude people or groups of people.
For the purposes of the specification, the terms "display" or "displaying" mean displaying on an electronic device. As used in this specification, the terms "computer-readable medium", "computer-readable media", and "machine-readable media" are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude any wireless signals, wired download signals, and any other ephemeral or transitory signals. [00276] While the invention has been described in detail with reference to the embodiments, those skilled in the art will understand that various alternatives and modifications can be made without departing from the spirit and scope of the invention. For instance, several of the examples described above illustrate virtual corporate WANs of corporate tenants of a virtual network provider. One skilled in the art will realize that, in some embodiments, the virtual network provider deploys virtual networks over multiple public cloud data centers of one or more public cloud providers for non-corporate tenants (for example, for schools, colleges, universities, non-profit entities, etc.). These virtual networks are virtual WANs that connect multiple compute endpoints (for example, offices, data centers, computers and devices of remote users, etc.) of the non-corporate entities. [00277] Several embodiments described above include various pieces of data in the overlay encapsulation headers. One skilled in the art will realize that other embodiments might not use the encapsulation headers to relay all of this data. For instance, instead of including the tenant identifier in the overlay encapsulation header, other embodiments derive the tenant identifier from the addresses of the CFEs that forward the data messages; for example, in some embodiments in which different tenants have their own MFNs deployed in the public clouds, the tenant identity is associated with the MFNs that process the tenant's messages.
[00278] Also, several figures conceptually illustrate processes of some embodiments of the invention. In other embodiments, the specific operations of these processes may not be performed in the exact order shown and described in these figures. The specific operations may not be performed in one continuous series of operations, and different specific operations may be performed in different embodiments. Furthermore, the process could be implemented using several sub-processes, or as part of a larger macro-process. Thus, one skilled in the art would understand that the invention is not to be limited by the foregoing illustrative details, but rather is to be defined by the appended claims.
Claims (34) [1] 1. Method for establishing virtual networks over a plurality of public cloud data centers, the method being characterized by the fact that it comprises: configuring a first set of routing elements in first and second multi-tenant public cloud data centers to implement a first virtual wide area network (WAN) for a first entity, said first virtual WAN connecting a plurality of machines operating at a set of two or more machine locations of the first entity; and configuring a second set of routing elements in the first and second multi-tenant public cloud data centers to implement a second virtual WAN for a second entity, said second virtual WAN connecting a plurality of machines operating at a set of two or more machine locations of the second entity. [2] 2. Method, according to claim 1, characterized by the fact that the set of machine locations of the first entity includes two or more office locations. [3] 3. Method, according to claim 2, characterized by the fact that the set of machine locations of the first entity further includes at least one data center location. [4] 4. Method, according to claim 3, characterized by the fact that the set of machine locations of the first entity further includes remote device locations. [5] 5. Method, according to claim 1, characterized by the fact that the set of machine locations of the first entity includes an office location and a data center location. [6] 6. Method, according to claim 5, characterized by the fact that the set of machine locations of the first entity further includes a location that comprises a plurality of machines of a SaaS (Software as a Service) provider. [7] 7. Method, according to claim 1, characterized by the fact that the machines include at least one of virtual machines, containers, or standalone computers. [8] 8.
Method, according to claim 1, characterized by the fact that at least a subset of the routing elements in the first set of routing elements is also in the second set of routing elements. [9] 9. Method, according to claim 8, characterized by the fact that at least another subset of the routing elements in the first set of routing elements is not in the second set of routing elements. [10] 10. Method, according to claim 1, characterized by the fact that configuring the first set of routing elements comprises configuring the first set of routing elements to use a first set of overlay virtual WAN headers to encapsulate data messages exchanged between the machines of the first entity at different machine locations; and configuring the second set of routing elements comprises configuring the second set of routing elements to use a second set of overlay virtual WAN headers to encapsulate data messages exchanged between the machines of the second entity at different machine locations; wherein the first set of overlay virtual WAN headers stores a first entity identifier that identifies the first entity, and wherein the second set of overlay virtual WAN headers stores a second entity identifier that identifies the second entity. [11] 11. Method, according to claim 10, characterized by the fact that the first and second sets of routing elements overlap, such that at least one routing element is in both sets of routing elements. [12] 12. Method, according to claim 1, characterized by the fact that the configuration of the first and second sets of routing elements is performed by a set of one or more controllers of a virtual network provider that deploys different virtual WANs for different entities in public cloud data centers of different public cloud providers and in different regions. [13] 13. Method, according to claim 1, characterized by the fact that the first and second sets of routing elements comprise a plurality of software routing elements executing on computers. [14] 14.
Method, according to claim 1, characterized by the fact that the first and second sets of routing elements comprise a plurality of software routing elements executing on host computers in the data centers. [15] 15. Method, according to claim 14, characterized by the fact that the plurality of software routing elements are machines executing on the host computers. [16] 16. Method, according to claim 15, characterized by the fact that at least a subset of the machines that implement the plurality of software routing elements are executing on host computers together with other machines. [17] 17. Method, according to claim 15, characterized by the fact that at least a subset of the machines that implement the plurality of software routing elements are virtual machines. [18] 18. Method for forwarding data message flows through at least two public cloud data centers of at least two different public cloud providers, the method being characterized by the fact that it comprises: at an ingress forwarding element of a first public cloud data center, receiving, from a first external machine outside of the public cloud data centers, a data message addressed to a second external machine outside of the public cloud data centers, wherein said second external machine is accessible through an egress forwarding element that is in a second public cloud data center; encapsulating the data message with a first header that includes network addresses of the ingress and egress forwarding elements as source and destination addresses; and encapsulating the data message with a second header that specifies, as source and destination network addresses, the network address of the ingress forwarding element and a network address of a next-hop forwarding element that is in a public cloud data center and that is the next hop on a path to the egress forwarding element. [19] 19. Method, according to claim 18, characterized by the fact that the next-hop forwarding element is in a third public cloud data center. [20] 20.
Method, according to claim 19, characterized by the fact that the first, second, and third public cloud data centers belong to three different public cloud providers. [21] 21. Method, according to claim 19, characterized by the fact that the first and second public cloud data centers belong to a first public cloud provider, while the third public cloud data center belongs to a different, second public cloud provider. [22] 22. Method, according to claim 19, characterized by the fact that the first and second public cloud data centers belong to two different public cloud providers, while the third public cloud data center belongs to the cloud provider of the first public cloud data center or of the second public cloud data center. [23] 23. Method, according to claim 19, characterized by the fact that the next-hop forwarding element is a first next-hop forwarding element, and the first next-hop forwarding element identifies a second next-hop forwarding element along the path as a next hop for the data message and specifies, in the second header, the source and destination network addresses as the network addresses of the first next-hop forwarding element and of the second next-hop forwarding element. [24] 24. Method, according to claim 23, characterized by the fact that the second next-hop forwarding element is the egress forwarding element. [25] 25. Method, according to claim 24, characterized by the fact that, after receiving the encapsulated data message, the egress forwarding element determines, from the destination network address in the first header, that the encapsulated data message is addressed to the egress forwarding element, removes the first and second headers from the data message, and forwards the data message to the second external machine. [26] 26. Method, according to claim 23, characterized by the fact that the second next-hop forwarding element is a fourth forwarding element that is different from the second forwarding element. [27] 27.
Method, according to claim 18, characterized by the fact that the next-hop forwarding element is the second forwarding element. [28] 28. Method, according to claim 18, characterized by the fact that it further comprises: processing, at the ingress and egress forwarding elements, data messages belonging to different tenants of a virtual network provider that defines different virtual networks in the public cloud data centers for different tenants; and storing, in the first encapsulation header of the received message, a tenant identifier that identifies the tenant associated with the first and second external machines. [29] 29. Method, according to claim 28, characterized by the fact that encapsulating the data message with the first and second headers defines, for the first tenant, an overlay virtual network that spans a group of public cloud data centers, including the first and second public cloud data centers. [30] 30. Method, according to claim 29, characterized by the fact that the tenants are companies and the virtual networks are corporate wide area networks (WANs). [31] 31. Method, according to claim 18, characterized by the fact that the first external machine is one of a machine at a first branch office, a machine at a first private data center, or a remote machine, and the second external machine is a machine at a second branch office or a machine at a second private data center. [32] 32. Machine-readable medium characterized by the fact that it stores a program that, when executed by at least one processing unit, implements the method according to any one of claims 1 to 17 and 18 to 31. [33] 33. Electronic device characterized by the fact that it comprises: a set of processing units; and a machine-readable medium that stores a program that, when executed by at least one of the processing units, implements the method according to any one of claims 1 to 17 and 18 to 31. [34] 34.
System characterized by the fact that it comprises means for implementing the method according to any one of claims 1 to 17 and 18 to 31.
failure resilient load balancing| US20110153909A1|2009-12-22|2011-06-23|Yao Zu Dong|Efficient Nested Virtualization| US8224971B1|2009-12-28|2012-07-17|Amazon Technologies, Inc.|Using virtual networking devices and routing information to initiate external actions| US8452932B2|2010-01-06|2013-05-28|Storsimple, Inc.|System and method for efficiently creating off-site data volume back-ups| US9378473B2|2010-02-17|2016-06-28|Alexander Wolfe|Content and application delivery network aggregation| EP2545682A4|2010-03-10|2017-01-04|Telefonaktiebolaget LM Ericsson |Sub-path e2e probing| US8724456B1|2010-05-19|2014-05-13|Juniper Networks, Inc.|Network path selection for multi-homed edges to ensure end-to-end resiliency| US8799504B2|2010-07-02|2014-08-05|Netgear, Inc.|System and method of TCP tunneling| US8705530B2|2010-07-29|2014-04-22|At&T Intellectual Property I, L.P.|Methods and apparatus to implement multipoint and replicated communication paths using upstream and recursive downstream label mappings| EP2601757B1|2010-08-05|2017-10-04|Thomson Licensing|Method and apparatus for converting a multicast session to a unicast session| US9069727B2|2011-08-12|2015-06-30|Talari Networks Incorporated|Adaptive private network with geographically redundant network control nodes| US9323561B2|2010-08-13|2016-04-26|International Business Machines Corporation|Calibrating cloud computing environments| EP2619949A1|2010-09-24|2013-07-31|BAE Systems Plc.|Admission control in a self aware network| US8589558B2|2010-11-29|2013-11-19|Radware, Ltd.|Method and system for efficient deployment of web applications in a multi-datacenter system| US8385225B1|2010-12-14|2013-02-26|Google Inc.|Estimating round trip time of a network path| US9031059B2|2010-12-17|2015-05-12|Verizon Patent And Licensing Inc.|Fixed mobile convergence and voice call continuity using a mobile device/docking station| US9009217B1|2011-01-06|2015-04-14|Amazon Technologies, Inc.|Interaction with a virtual network| 
US8806482B1|2011-01-06|2014-08-12|Amazon Technologies, Inc.|Interaction with a virtual network| US9135037B1|2011-01-13|2015-09-15|Google Inc.|Virtual network protocol| US8462780B2|2011-03-30|2013-06-11|Amazon Technologies, Inc.|Offload device-based stateless packet processing| US8774213B2|2011-03-30|2014-07-08|Amazon Technologies, Inc.|Frameworks and interfaces for offload device-based packet processing| US8661295B1|2011-03-31|2014-02-25|Amazon Technologies, Inc.|Monitoring and detecting causes of failures of network paths| WO2012154506A1|2011-05-06|2012-11-15|Interdigital Patent Holdings, Inc.|Method and apparatus for bandwidth aggregation for ip flow| US8873398B2|2011-05-23|2014-10-28|Telefonaktiebolaget L M Ericsson |Implementing EPC in a cloud computer with openflow data plane| US9154327B1|2011-05-27|2015-10-06|Cisco Technology, Inc.|User-configured on-demand virtual layer-2 network for infrastructure-as-a-service on a hybrid cloud network| WO2012167184A2|2011-06-02|2012-12-06|Interdigital Patent Holdings, Inc.|Methods, apparatus, and systems for managing converged gateway communications| US9304798B2|2011-06-07|2016-04-05|Hewlett Packard Enterprise Development Lp|Scalable multi-tenant network architecture for virtualized datacenters| US8804745B1|2011-06-27|2014-08-12|Amazon Technologies, Inc.|Virtualization mapping| US9100305B2|2011-07-12|2015-08-04|Cisco Technology, Inc.|Efficient admission control for low power and lossy networks| AU2012296329B2|2011-08-17|2015-08-27|Nicira, Inc.|Logical L3 routing| US10091028B2|2011-08-17|2018-10-02|Nicira, Inc.|Hierarchical controller clusters for interconnecting two or more logical datapath sets| US10044678B2|2011-08-31|2018-08-07|At&T Intellectual Property I, L.P.|Methods and apparatus to configure virtual private mobile networks with virtual private networks| CN102377630A|2011-10-13|2012-03-14|华为技术有限公司|Traffic engineering tunnel-based virtual private network implementation method and traffic engineering tunnel-based 
virtual private network implementation system| US20130103834A1|2011-10-21|2013-04-25|Blue Coat Systems, Inc.|Multi-Tenant NATting for Segregating Traffic Through a Cloud Service| US8745177B1|2011-11-01|2014-06-03|Edgecast Networks, Inc.|End-to-end monitoring and optimization of a content delivery network using anycast routing| US8756453B2|2011-11-15|2014-06-17|International Business Machines Corporation|Communication system with diagnostic capabilities| US8874974B2|2011-11-15|2014-10-28|International Business Machines Corporation|Synchronizing a distributed communication system using diagnostic heartbeating| US8769089B2|2011-11-15|2014-07-01|International Business Machines Corporation|Distributed application using diagnostic heartbeating| US20130142201A1|2011-12-02|2013-06-06|Microsoft Corporation|Connecting on-premise networks with public clouds| CN103166824B|2011-12-13|2016-09-28|华为技术有限公司|A kind of interconnected method, device and system| CN102413061B|2011-12-31|2015-04-15|杭州华三通信技术有限公司|Message transmission method and equipment| US8908698B2|2012-01-13|2014-12-09|Cisco Technology, Inc.|System and method for managing site-to-site VPNs of a cloud managed network| US9106555B2|2012-01-25|2015-08-11|Cisco Technology, Inc.|Troubleshooting routing topology based on a reference topology| US8660129B1|2012-02-02|2014-02-25|Cisco Technology, Inc.|Fully distributed routing over a user-configured on-demand virtual network for infrastructure-as-a-service on hybrid cloud networks| US20130238782A1|2012-03-09|2013-09-12|Alcatel-Lucent Usa Inc.|Method and apparatus for identifying an application associated with an ip flow using dns data| US8730793B2|2012-03-16|2014-05-20|Avaya Inc.|Method and apparatus providing network redundancy and high availability to remote network nodes| US8892936B2|2012-03-20|2014-11-18|Symantec Corporation|Cluster wide consistent detection of interconnect failures| US8885562B2|2012-03-28|2014-11-11|Telefonaktiebolaget L M Ericsson |Inter-chassis redundancy 
with coordinated traffic direction| US8856339B2|2012-04-04|2014-10-07|Cisco Technology, Inc.|Automatically scaled network overlay with heuristic monitoring in a hybrid cloud environment| US9203784B2|2012-04-24|2015-12-01|Cisco Technology, Inc.|Distributed virtual switch architecture for a hybrid cloud| US9071541B2|2012-04-25|2015-06-30|Juniper Networks, Inc.|Path weighted equal-cost multipath| US9389920B2|2012-05-02|2016-07-12|Futurewei Technologies, Inc.|Intelligent data center cluster selection| US9054999B2|2012-05-09|2015-06-09|International Business Machines Corporation|Static TRILL routing| US9379971B2|2012-05-11|2016-06-28|Simula Inovation AS|Method and apparatus for determining paths between source/destination pairs| KR102043071B1|2012-05-15|2019-11-11|텔레폰악티에볼라겟엘엠에릭슨|Methods and apparatus for detecting and handling split brain issues in a link aggregation group| TWI482469B|2012-05-23|2015-04-21|Gemtek Technology Co Ltd|Routing device| US10063443B2|2012-05-29|2018-08-28|Openet Telecom Ltd.|System and method for managing VoLTE session continuity information using logical scalable units| US9898317B2|2012-06-06|2018-02-20|Juniper Networks, Inc.|Physical path determination for virtual network packet flows| US8953441B2|2012-06-06|2015-02-10|Juniper Networks, Inc.|Re-routing network traffic after link failure| US9729424B2|2012-06-11|2017-08-08|Futurewei Technologies, Inc.|Defining data flow paths in software-defined networks with application-layer traffic optimization| US9647938B2|2012-06-11|2017-05-09|Radware, Ltd.|Techniques for providing value-added services in SDN-based networks| US10031782B2|2012-06-26|2018-07-24|Juniper Networks, Inc.|Distributed processing of network device tasks| US9819658B2|2012-07-12|2017-11-14|Unisys Corporation|Virtual gateways for isolating virtual machines| CN104094577B|2012-08-13|2017-07-04|统一有限责任两合公司|Method and apparatus for evaluating the state of mobile body indirectly| US9210079B2|2012-08-14|2015-12-08|Vmware, Inc.|Method and 
system for virtual and physical network integration| US9563480B2|2012-08-21|2017-02-07|Rackspace Us, Inc.|Multi-level cloud computing system| US9331940B2|2012-08-28|2016-05-03|Alcatel Lucent|System and method providing distributed virtual routing and switching | CN103227757B|2012-08-31|2016-12-28|杭州华三通信技术有限公司|A kind of message forwarding method and equipment| US9807613B2|2012-09-06|2017-10-31|Dell Products, Lp|Collaborative method and system to improve carrier network policies with context aware radio communication management| EP2901625B1|2012-09-28|2020-04-08|Cornell University|System and methods for improved network routing| CN103731407B|2012-10-12|2017-08-11|华为技术有限公司|The method and system of IKE message negotiations| CN105190557B|2012-10-16|2018-09-14|思杰系统有限公司|For by multistage API set in the public system and method bridged between private clound| US20140112171A1|2012-10-23|2014-04-24|Mr. Babak PASDAR|Network system and method for improving routing capability| US9223635B2|2012-10-28|2015-12-29|Citrix Systems, Inc.|Network offering in cloud computing environment| US9930011B1|2012-11-30|2018-03-27|United Services Automobile Association |Private network request forwarding| EP2760158B1|2012-12-03|2017-02-15|Huawei Technologies Co., Ltd.|Policy processing method and network device| US9417922B2|2012-12-03|2016-08-16|Cutting Edge Consulting Associates, Inc.|Systems and methods for protecting an identity in network communications| KR20140073243A|2012-12-06|2014-06-16|삼성전자주식회사|Apparatus and method for processing http massage| US9338225B2|2012-12-06|2016-05-10|A10 Networks, Inc.|Forwarding policies on a virtual service network| EP2929717A4|2012-12-07|2016-07-20|Hewlett Packard Entpr Dev Lp|Network resource management| US9055000B1|2012-12-17|2015-06-09|Juniper Networks, Inc.|Distributed network subnet| US9515899B2|2012-12-19|2016-12-06|Veritas Technologies Llc|Providing optimized quality of service to prioritized virtual machines and applications based on quality of 
shared resources| US9621460B2|2013-01-14|2017-04-11|Versa Networks, Inc.|Connecting multiple customer sites over a wide area network using an overlay network| JP6024474B2|2013-01-23|2016-11-16|富士通株式会社|Multi-tenant system, management apparatus, management program, and control method of multi-tenant system| US9060025B2|2013-02-05|2015-06-16|Fortinet, Inc.|Cloud-based security policy configuration| WO2014121460A1|2013-02-06|2014-08-14|华为技术有限公司|Method, device and routing system for data transmission of network virtualization| US10348767B1|2013-02-26|2019-07-09|Zentera Systems, Inc.|Cloud over IP session layer network| US9525564B2|2013-02-26|2016-12-20|Zentera Systems, Inc.|Secure virtual network platform for enterprise hybrid cloud computing environments| US9699034B2|2013-02-26|2017-07-04|Zentera Systems, Inc.|Secure cloud fabric to connect subnets in different network domains| US9065734B2|2013-03-08|2015-06-23|Telefonaktiebolaget L M Ericsson |Network bandwidth allocation in multi-tenancy cloud computing networks| US9306949B1|2013-03-12|2016-04-05|Amazon Technologies, Inc.|Configure interconnections between networks hosted in datacenters| US20140269690A1|2013-03-13|2014-09-18|Qualcomm Incorporated|Network element with distributed flow tables| US9832205B2|2013-03-15|2017-11-28|International Business Machines Corporation|Cross provider security management functionality within a cloud service brokerage platform| US9354983B1|2013-03-15|2016-05-31|Entreda, Inc.|Integrated it service provisioning and management| US9483286B2|2013-03-15|2016-11-01|Avi Networks|Distributed network services| US9628328B2|2013-03-15|2017-04-18|Rackspace Us, Inc.|Network controller with integrated resource management capability| US9075771B1|2013-03-15|2015-07-07|Symantec Corporation|Techniques for managing disaster recovery sites| US9450817B1|2013-03-15|2016-09-20|Juniper Networks, Inc.|Software defined network controller| US10263848B2|2013-03-20|2019-04-16|Wolting Holding B.V.|Compiler for and 
method for software defined networks| US9432245B1|2013-04-16|2016-08-30|Amazon Technologies, Inc.|Distributed load balancer node architecture| CN104219147B|2013-06-05|2018-10-16|中兴通讯股份有限公司|The VPN of edge device realizes processing method and processing device| US9471356B2|2013-06-12|2016-10-18|Dell Products L.P.|Systems and methods for providing VLAN-independent gateways in a network virtualization overlay implementation| US9264289B2|2013-06-27|2016-02-16|Microsoft Technology Licensing, Llc|Endpoint data centers of different tenancy sets| US9608962B1|2013-07-09|2017-03-28|Pulse Secure, Llc|Application-aware connection for network access client| US10749711B2|2013-07-10|2020-08-18|Nicira, Inc.|Network-link method useful for a last-mile connectivity in an edge-gateway multipath system| US10454714B2|2013-07-10|2019-10-22|Nicira, Inc.|Method and system of overlay flow control| US9722815B2|2013-07-10|2017-08-01|Sunil Mukundan|Edge-gateway multipath method and system| US10003536B2|2013-07-25|2018-06-19|Grigore Raileanu|System and method for managing bandwidth usage rates in a packet-switched network| US9979622B2|2013-07-30|2018-05-22|Cisco Technology, Inc.|Elastic WAN optimization cloud services| US9203781B2|2013-08-07|2015-12-01|Cisco Technology, Inc.|Extending virtual station interface discovery protocol and VDP-like protocols for dual-homed deployments in data center environments| US9641551B1|2013-08-13|2017-05-02|vIPtela Inc.|System and method for traversing a NAT device with IPSEC AH authentication| US9311140B2|2013-08-13|2016-04-12|Vmware, Inc.|Method and apparatus for extending local area networks between clouds and migrating virtual machines using static network addresses| US9338223B2|2013-08-14|2016-05-10|Verizon Patent And Licensing Inc.|Private cloud topology management system| WO2015021629A1|2013-08-15|2015-02-19|华为技术有限公司|Resource allocation method| US20150089628A1|2013-09-24|2015-03-26|Michael Lang|System and Method for Provision of a Router / Firewall in a 
Network| US20150088942A1|2013-09-25|2015-03-26|Westell Technologies, Inc.|Methods and Systems for Providing File Services| US9379981B1|2013-09-27|2016-06-28|Google Inc.|Flow level dynamic load balancing| US9461969B2|2013-10-01|2016-10-04|Racemi, Inc.|Migration of complex applications within a hybrid cloud environment| US9635580B2|2013-10-08|2017-04-25|Alef Mobitech Inc.|Systems and methods for providing mobility aspects to applications in the cloud| US9397946B1|2013-11-05|2016-07-19|Cisco Technology, Inc.|Forwarding to clusters of service nodes| JP2015095784A|2013-11-12|2015-05-18|富士通株式会社|Information processing system, control method for information processing system, and control program for information processor| US9912582B2|2013-11-18|2018-03-06|Telefonaktiebolaget Lm Ericsson |Multi-tenant isolation in a cloud environment using software defined networking| US9231871B2|2013-11-25|2016-01-05|Versa Networks, Inc.|Flow distribution table for packet flow load balancing| US9813343B2|2013-12-03|2017-11-07|Akamai Technologies, Inc.|Virtual private network -as-a-service with load-balanced tunnel endpoints| TWI528755B|2013-12-06|2016-04-01|財團法人工業技術研究院|A controller for delay measurement, a delay measurement system and a delay measurement method in sdn| US9461923B2|2013-12-06|2016-10-04|Algoblu Holdings Limited|Performance-based routing in software-defined network | US9288135B2|2013-12-13|2016-03-15|International Business Machines Corporation|Managing data flows in software-defined network using network interface card| US9467478B1|2013-12-18|2016-10-11|vIPtela Inc.|Overlay management protocol for secure routing based on an overlay network| US20150189009A1|2013-12-30|2015-07-02|Alcatel-Lucent Canada Inc.|Distributed multi-level stateless load balancing| US9450852B1|2014-01-03|2016-09-20|Juniper Networks, Inc.|Systems and methods for preventing split-brain scenarios in high-availability clusters| US10097372B2|2014-01-09|2018-10-09|Ciena Corporation|Method for resource 
optimized network virtualization overlay transport in virtualized data center environments| JP6328432B2|2014-01-16|2018-05-23|クラリオン株式会社|Gateway device, file server system, and file distribution method| US9436813B2|2014-02-03|2016-09-06|Ca, Inc.|Multi-tenancy support for a product that does not support multi-tenancy| US9825822B1|2014-02-13|2017-11-21|Amazon Technologies, Inc.|Group networking in an overlay network| US20150236962A1|2014-02-14|2015-08-20|Exinda Networks PTY, Ltd. of Australia|Method and system for using dynamic bandwidth detection to drive quality of service control refinement| US8989199B1|2014-02-24|2015-03-24|Level 3 Communications, Llc|Control device discovery in networks having separate control and forwarding devices| WO2015133327A1|2014-03-07|2015-09-11|日本電気株式会社|Network system, inter-site network cooperation control device, network control method, and program| US9479424B2|2014-03-18|2016-10-25|Telefonaktiebolaget Lm Ericsson |Optimized approach to IS-IS LFA computation with parallel links| US10476698B2|2014-03-20|2019-11-12|Avago Technologies International Sales Pte. 
Limited|Redundent virtual link aggregation group| US9647883B2|2014-03-21|2017-05-09|Nicria, Inc.|Multiple levels of logical routers| US9787559B1|2014-03-28|2017-10-10|Juniper Networks, Inc.|End-to-end monitoring of overlay networks providing virtualized network services| US9807004B2|2014-04-01|2017-10-31|Google Inc.|System and method for software defined routing of traffic within and between autonomous systems with enhanced flow routing, scalability and security| US9407541B2|2014-04-24|2016-08-02|International Business Machines Corporation|Propagating a flow policy by control packet in a software defined network based network| WO2015171469A1|2014-05-04|2015-11-12|Midfin Systems Inc.|Constructing and operating high-performance unified compute infrastructure across geo-distributed datacenters| US9961545B2|2014-06-03|2018-05-01|Qualcomm Incorporated|Systems, methods, and apparatus for authentication during fast initial link setup| US10062045B2|2014-06-12|2018-08-28|International Business Machines Corporation|Project workspace prioritization| US9350710B2|2014-06-20|2016-05-24|Zscaler, Inc.|Intelligent, cloud-based global virtual private network systems and methods| US10019278B2|2014-06-22|2018-07-10|Cisco Technology, Inc.|Framework for network technology agnostic multi-cloud elastic extension and isolation| US10075329B2|2014-06-25|2018-09-11|A 10 Networks, Incorporated|Customizable high availability switchover control of application delivery controllers| US9634936B2|2014-06-30|2017-04-25|Juniper Networks, Inc.|Service chaining across multiple networks| CN105323136B|2014-07-31|2020-01-10|中兴通讯股份有限公司|Information processing method and device| US20160035183A1|2014-07-31|2016-02-04|Wms Gaming Inc.|Electronic gaming machine service bus| EP3175647B1|2014-08-03|2018-12-12|Hughes Network Systems, LLC|Centralized ground-based route determination and traffic engineering for software defined satellite communications networks| US10609159B2|2014-08-04|2020-03-31|Microsoft Technology 
Licensing, Llc|Providing higher workload resiliency in clustered systems based on health heuristics| US9356943B1|2014-08-07|2016-05-31|Symantec Corporation|Systems and methods for performing security analyses on network traffic in cloud-based environments| US9665432B2|2014-08-07|2017-05-30|Microsoft Technology Licensing, Llc|Safe data access following storage failure| US9336040B2|2014-09-15|2016-05-10|Intel Corporation|Techniques for remapping sessions for a multi-threaded application| US9742626B2|2014-09-16|2017-08-22|CloudGenix, Inc.|Methods and systems for multi-tenant controller based mapping of device identity to network level identity| US10038601B1|2014-09-26|2018-07-31|Amazon Technologies, Inc.|Monitoring a multi-tier network fabric| US9723065B2|2014-10-13|2017-08-01|Vmware, Inc.|Cross-cloud object mapping for hybrid clouds| US9825905B2|2014-10-13|2017-11-21|Vmware Inc.|Central namespace controller for multi-tenant cloud environments| JP6721166B2|2014-10-14|2020-07-08|ミド ホールディングス リミテッド|System and method for distributed flow state P2P configuration in virtual networks| EP3215939B1|2014-11-07|2019-04-24|British Telecommunications public limited company|Method and device for secure communication with shared cloud services| US9590902B2|2014-11-10|2017-03-07|Juniper Networks, Inc.|Signaling aliasing capability in data centers| US9930013B2|2014-11-14|2018-03-27|Cisco Technology, Inc.|Control of out-of-band multipath connections| US9602544B2|2014-12-05|2017-03-21|Viasat, Inc.|Methods and apparatus for providing a secure overlay network between clouds| US9560018B2|2014-12-08|2017-01-31|Cisco Technology, Inc.|Autonomic locator/identifier separation protocol for secure hybrid cloud extension| US9747249B2|2014-12-29|2017-08-29|Nicira, Inc.|Methods and systems to achieve multi-tenancy in RDMA over converged Ethernet| US9787573B2|2014-12-31|2017-10-10|Juniper Networks, Inc.|Fast convergence on link failure in multi-homed Ethernet virtual private networks| 
US20160198003A1|2015-01-02|2016-07-07|Siegfried Luft|Architecture and method for sharing dedicated public cloud connectivity| US20160197835A1|2015-01-02|2016-07-07|Siegfried Luft|Architecture and method for virtualization of cloud networking components| US20160197834A1|2015-01-02|2016-07-07|Siegfried Luft|Architecture and method for traffic engineering between diverse cloud providers| WO2016112484A1|2015-01-12|2016-07-21|Telefonaktiebolaget Lm Ericsson |Method and apparatus for router maintenance| US10061664B2|2015-01-15|2018-08-28|Cisco Technology, Inc.|High availability and failover| US9819565B2|2015-01-26|2017-11-14|Ciena Corporation|Dynamic policy engine for multi-layer network management| CN104639639B|2015-02-09|2018-04-27|华为技术有限公司|A kind of method of adjustment of deploying virtual machine position, apparatus and system| US20160255169A1|2015-02-27|2016-09-01|Futurewei Technologies, Inc.|Method and system for smart object eviction for proxy cache| CN105991430B|2015-03-05|2022-01-14|李明|Data routing across multiple autonomous network systems| US9628380B2|2015-03-06|2017-04-18|Telefonaktiebolaget L M Ericsson |Method and system for routing a network function chain| US10425382B2|2015-04-13|2019-09-24|Nicira, Inc.|Method and system of a cloud-based multipath routing protocol| US10135789B2|2015-04-13|2018-11-20|Nicira, Inc.|Method and system of establishing a virtual private network in a cloud service for branch networking| US10498652B2|2015-04-13|2019-12-03|Nicira, Inc.|Method and system of application-aware routing with crowdsourcing| US9948552B2|2015-04-17|2018-04-17|Equinix, Inc.|Cloud-based services exchange| US9848041B2|2015-05-01|2017-12-19|Amazon Technologies, Inc.|Automatic scaling of resource instance groups within compute clusters| US20170214701A1|2016-01-24|2017-07-27|Syed Kamran Hasan|Computer security based on artificial intelligence| US10834054B2|2015-05-27|2020-11-10|Ping Identity Corporation|Systems and methods for API routing and security| 
US9729348B2|2015-06-04|2017-08-08|Cisco Technology, Inc.|Tunnel-in-tunnel source address correction| US10397277B2|2015-06-14|2019-08-27|Avocado Systems Inc.|Dynamic data socket descriptor mirroring mechanism and use for security analytics| US20160380886A1|2015-06-25|2016-12-29|Ciena Corporation|Distributed data center architecture| US9787641B2|2015-06-30|2017-10-10|Nicira, Inc.|Firewall rule management| US10797992B2|2015-07-07|2020-10-06|Cisco Technology, Inc.|Intelligent wide area network | US9462010B1|2015-07-07|2016-10-04|Accenture Global Services Limited|Threat assessment level determination and remediation for a cloud-based multi-layer security architecture| US10397283B2|2015-07-15|2019-08-27|Oracle International Corporation|Using symmetric and asymmetric flow response paths from an autonomous system| US10050951B2|2015-07-20|2018-08-14|Cisco Technology, Inc.|Secure access to virtual machines in heterogeneous cloud environments| US10637889B2|2015-07-23|2020-04-28|Cisco Technology, Inc.|Systems, methods, and devices for smart mapping and VPN policy enforcement| US10298489B2|2015-07-24|2019-05-21|International Business Machines Corporation|Adding multi-tenant awareness to a network packet processing device on a software defined network | US9942131B2|2015-07-29|2018-04-10|International Business Machines Corporation|Multipathing using flow tunneling through bound overlay virtual machines| US10567347B2|2015-07-31|2020-02-18|Nicira, Inc.|Distributed tunneling for VPN| WO2017022791A1|2015-08-04|2017-02-09|日本電気株式会社|Communication system, communication apparatus, communication method, terminal, and program| US9763054B2|2015-08-19|2017-09-12|Locix Inc.|Systems and methods for determining locations of wireless sensor nodes in a tree network architecture having mesh-based features| US10198724B2|2015-08-21|2019-02-05|Mastercard International Incorporated|Payment networks and methods for facilitating data transfers within payment networks| 
US9906561B2|2015-08-28|2018-02-27|Nicira, Inc.|Performing logical segmentation based on remote device attributes| US10547540B2|2015-08-29|2020-01-28|Vmware, Inc.|Routing optimization for inter-cloud connectivity| US10225331B1|2015-09-23|2019-03-05|EMC IP Holding Company LLC|Network address translation load balancing over multiple internet protocol addresses| CN108886825B|2015-09-23|2022-02-18|谷歌有限责任公司|Distributed software defined radio packet core system| CN107079035B|2015-09-25|2020-05-19|优倍快公司|Compact and integrated key controller device for monitoring a network| US10229017B1|2015-10-01|2019-03-12|EMC IP Holding Company LLC|Resetting fibre channel devices for failover in high availability backup systems| US10067780B2|2015-10-06|2018-09-04|Cisco Technology, Inc.|Performance-based public cloud selection for a hybrid cloud environment| US10462136B2|2015-10-13|2019-10-29|Cisco Technology, Inc.|Hybrid cloud security groups| US10333897B2|2015-10-23|2019-06-25|Attala Systems Corporation|Distributed firewalls and virtual network services using network packets with security tags| CN106656801B|2015-10-28|2019-11-15|华为技术有限公司|Reorientation method, device and the Business Stream repeater system of the forward-path of Business Stream| US9747179B2|2015-10-29|2017-08-29|Netapp, Inc.|Data management agent for selective storage re-caching| US9916214B2|2015-11-17|2018-03-13|International Business Machines Corporation|Preventing split-brain scenario in a high-availability cluster| US9825911B1|2015-11-18|2017-11-21|Amazon Technologies, Inc.|Security policy check based on communication establishment handshake packet| US9602389B1|2015-11-21|2017-03-21|Naveen Maveli|Method and system for defining logical channels and channel policies in an application acceleration environment| US9843485B2|2015-11-30|2017-12-12|International Business Machines Coprporation|Monitoring dynamic networks| US10257019B2|2015-12-04|2019-04-09|Arista Networks, Inc.|Link aggregation split-brain detection and 
Legal status:
2021-11-23 | B350 | Update of information on the portal [chapter 15.35 patent gazette]
Priority:
Application number | Publication number | Priority date | Filing date | Patent title
US201762566524P | | 2017-10-02 | 2017-10-02 |
US 62/566,524 | | | 2017-10-02 |
US 15/972,090 | US10666460B2 | 2017-10-02 | 2018-05-04 | Measurement based routing through multiple public clouds
US 15/972,102 | US20190104109A1 | 2017-10-02 | 2018-05-04 | Deploying firewall for virtual network defined over public cloud infrastructure
US 15/972,091 | US10608844B2 | 2017-10-02 | 2018-05-04 | Graph based routing through multiple public clouds
US 15/972,103 | US10841131B2 | 2017-10-02 | 2018-05-04 | Distributed WAN security gateway
US 15/972,086 | US20190104049A1 | 2017-10-02 | 2018-05-04 | Overlay network encapsulation to forward data message flows through multiple public cloud datacenters
US 15/972,088 | US11102032B2 | 2017-10-02 | 2018-05-04 | Routing data message flow through multiple public clouds
US 15/972,095 | US10958479B2 | 2017-10-02 | 2018-05-04 | Selecting one node from several candidate nodes in several public clouds to establish a virtual network that spans the public clouds
US 15/972,093 | US10594516B2 | 2017-10-02 | 2018-05-04 | Virtual network provider
US 15/972,083 | US11005684B2 | 2017-10-02 | 2018-05-04 | Creating virtual networks spanning multiple public clouds
US 15/972,100 | US10778466B2 | 2017-10-02 | 2018-05-04 | Processing data messages of a virtual network that are sent to and received from external service machines
US 15/972,104 | US10686625B2 | 2017-10-02 | 2018-05-04 | Defining and distributing routes for a virtual network
US 15/972,098 | US10805114B2 | 2017-10-02 | 2018-05-04 | Processing data messages of a virtual network that are sent to and received from external service machines
PCT/US2018/053811 | WO2019070611A1 | 2017-10-02 | 2018-10-01 | Creating virtual networks spanning multiple public clouds